CN109656461B - Screen capturing method and terminal
- Publication number: CN109656461B
- Application number: CN201811564821.2A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
Abstract
An embodiment of the invention provides a screen capture method and a terminal. The screen capture method is applied to the terminal and comprises the following steps: receiving a screen capture input for capturing at least a partial area of the current interface; in response to the screen capture input, determining the at least partial area corresponding to the screen capture input as a screen capture selection area, and capturing the image in the screen capture selection area to obtain an image to be processed; receiving a stitching input for stitching an image to be stitched with the image to be processed; and in response to the stitching input, stitching the image to be stitched with the image to be processed to obtain a stitched image. In this way, the user can freely select the screen capture selection area, the image to be processed captured from that area is stitched with the image to be stitched, the stitched image the user wants can be obtained, and the user's different requirements can be met.
Description
Technical Field
Embodiments of the invention relate to the technical field of image processing, and in particular to a screen capturing method and a terminal.
Background
With the development of technology, terminals such as mobile phones and tablet computers have become indispensable tools in people's daily lives. When a user needs to share or save content displayed on the screen, the content is usually acquired by taking a screenshot. Long screenshots in particular have become increasingly popular, because they are larger than an ordinary screenshot and contain more information.
However, an existing long screenshot is usually formed by stitching several full-screen images together from top to bottom. This single, fixed form cannot meet users' diverse requirements; for example, a user may not want to stitch multiple full-screen images, but only several partial regions of the screen.
Therefore, how to let a user obtain the stitched image he or she actually wants, according to the user's particular requirements, is a technical problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the invention provides a screen capturing method and a terminal, and aims to solve the problem that the existing spliced image cannot meet the requirements of users.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a screen capture method, which is applied to a terminal, and includes:
receiving a screen capture input for capturing at least a partial area of the current interface;
in response to the screen capture input, determining the at least partial area corresponding to the screen capture input as a screen capture selection area, and capturing the image in the screen capture selection area to obtain an image to be processed;
receiving a stitching input for stitching an image to be stitched with the image to be processed;
and in response to the stitching input, stitching the image to be stitched with the image to be processed to obtain a stitched image.
In a second aspect, an embodiment of the present invention further provides a terminal, including:
the first receiving module is used for receiving screen capture input for capturing at least partial area in the current interface;
the intercepting module is used for responding to the screen capturing input, determining a screen capturing selection area corresponding to the screen capturing input, and intercepting an image in the screen capturing selection area to obtain an image to be processed;
the second receiving module is used for receiving splicing input used for splicing the image to be spliced and the image to be processed;
and the splicing module is used for responding to the splicing input, splicing the image to be spliced and the image to be processed to obtain a spliced image.
In a third aspect, an embodiment of the present invention further provides a terminal, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the screen capturing method described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the screen capturing method described above.
In the embodiment of the invention, the user can freely select the screen capture selection area, and the image to be processed obtained by capturing the screen capture selection area is spliced with the image to be spliced, so that the spliced image desired by the user can be obtained, and different requirements of the user are met.
Drawings
Fig. 1 is a schematic flowchart of a screen capture method according to a first embodiment of the present invention;
FIGS. 2-6 are schematic diagrams of a current interface according to embodiments of the present invention;
FIGS. 7-8 are schematic diagrams of stitching an image to be processed and an image to be stitched according to an embodiment of the present invention;
FIGS. 9-10 are schematic diagrams of a current interface according to an embodiment of the invention;
fig. 11 is a schematic structural diagram of a terminal according to a second embodiment of the present invention;
fig. 12 is a schematic structural diagram of a terminal according to a third embodiment of the present invention.
Fig. 13 is a schematic diagram of a hardware structure of a terminal implementing various embodiments of the present invention;
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention, are within the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic flowchart of a screen capturing method according to a first embodiment of the present invention, where the method is applied to a terminal, and includes:
step 11: receiving a screen capture input for capturing at least a partial area of the current interface;
step 12: in response to the screen capture input, determining the at least partial area corresponding to the screen capture input as a screen capture selection area, and capturing the image in the screen capture selection area to obtain an image to be processed;
step 13: receiving a stitching input for stitching an image to be stitched with the image to be processed;
step 14: in response to the stitching input, stitching the image to be stitched with the image to be processed to obtain a stitched image.
By adopting the screen capture method provided by the embodiment of the invention, the user can freely select the screen capture selection area, and the image to be processed obtained by capturing the screen capture selection area is spliced with the image to be spliced, so that the spliced image desired by the user can be obtained, and different requirements of the user are met.
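Taken together, steps 11 to 14 form a simple capture-then-stitch pipeline. The sketch below illustrates that control flow only; the `Rect` and `Image` types and the helper functions are hypothetical placeholders introduced for illustration, not part of the patent or of any platform API, and the stitch step is shown as a plain vertical concatenation of sizes.

```kotlin
// Hypothetical stand-ins for platform region/bitmap types.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)
data class Image(val width: Int, val height: Int)

// Step 12: crop the screen capture selection area out of the current interface.
fun cropSelection(selection: Rect): Image =
    Image(selection.right - selection.left, selection.bottom - selection.top)

// Step 14: stitch the image to be stitched with the image to be processed
// (illustrated here as a simple vertical concatenation).
fun stitchVertically(toBeStitched: Image, toBeProcessed: Image): Image =
    Image(maxOf(toBeStitched.width, toBeProcessed.width),
          toBeStitched.height + toBeProcessed.height)

fun main() {
    val selection = Rect(0, 400, 1080, 1200)             // step 11: the screen capture input selects this area
    val toBeProcessed = cropSelection(selection)         // step 12: image to be processed
    val toBeStitched = Image(1080, 600)                  // step 13: previously captured image to be stitched
    val stitched = stitchVertically(toBeStitched, toBeProcessed)  // step 14
    println("Stitched image: ${stitched.width} x ${stitched.height} px")  // 1080 x 1400 px
}
```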
In the embodiment of the invention, the image to be stitched is an image captured before the image to be processed is captured.
That is, the image to be stitched may be at least one of several images captured before the image to be processed is captured.
Preferably, the image to be stitched may be the image captured most recently before the image to be processed was captured.
Of course, in other embodiments of the present invention, the image to be stitched may also be an image selected by the user from preset applications, for example: images selected by the user from an album, images downloaded from a circle of friends, etc.
In some preferred embodiments of the present invention, before the step of determining the screen capture selection area, the method further includes:
receiving a start input for starting a screen capture function;
and responding to the starting input, and displaying a selection frame for determining the screen capture selection area and/or a display tool associated with the images to be spliced on the current interface.
The start input may be a press input on a preset key of the terminal. The preset key may be a physical key on the terminal, for example: the start input is a press input that presses the power key and a volume key simultaneously. The preset key may also be a virtual key displayed on the touch display screen of the mobile terminal, for example: the start input is a press on an "enable screen capture" key in a swipe-up menu, a side-swipe menu or a pull-down menu.
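As a rough illustration only, the fragment below maps a set of currently pressed keys to the decision to enable the screen capture function; the key names and the rule itself are assumptions made for this sketch, not behaviour prescribed by the patent.

```kotlin
enum class Key { POWER, VOLUME_DOWN, SCREENSHOT_VIRTUAL }

// Hypothetical rule: a simultaneous power + volume press, or a tap on a virtual
// "enable screen capture" key in a swipe-up, side-swipe or pull-down menu.
fun isStartInput(pressedKeys: Set<Key>): Boolean =
    (Key.POWER in pressedKeys && Key.VOLUME_DOWN in pressedKeys) ||
        Key.SCREENSHOT_VIRTUAL in pressedKeys

fun main() {
    println(isStartInput(setOf(Key.POWER, Key.VOLUME_DOWN)))   // true: physical key combination
    println(isStartInput(setOf(Key.SCREENSHOT_VIRTUAL)))       // true: virtual key in a menu
    println(isStartInput(setOf(Key.VOLUME_DOWN)))              // false
}
```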
Preferably, the step of displaying a display tool associated with the image to be stitched on the current interface includes:
displaying a floating window on the current interface, wherein the image to be stitched is displayed in the floating window; or
displaying a floating button on the current interface, wherein a thumbnail of the image to be stitched is displayed in the floating button; or
displaying an adsorption button on the edge of the current interface.
Taking fig. 2-4 as an example, if the user starts the screen capture function, the selection box 22 for determining the screen capture selection area 221 and the display tool associated with the image 231 to be stitched are displayed on the current interface 21. Wherein, the display tool may be a floating window 24 for displaying the image 231 to be stitched, as shown in fig. 2; the display tool may also be a hover button 25 that displays thumbnails of the images to be stitched, as shown in FIG. 3; the display tool may also be an adsorption button 26 which is adsorbed and displayed on the edge of the current interface 21 and corresponds to the image 231 to be stitched, as shown in fig. 4.
The selection frame in the embodiment of the invention can be rectangular or oval, and of course, can also be in other shapes customized by a user, so that the selection frame is more flexible and convenient.
In other embodiments, the selection box may not be displayed on the current interface, and the screen capture selection area may be determined according to the input track of the user on the current interface, for example: the user draws a closed figure on the current interface, and the inner area of the closed figure is determined as the screen capture selection area. The full-screen area of the currently displayed interface may also be determined as the screen capture selection area. When the selection frame is not displayed, the display of the current interface is cleaner and more attractive.
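One simple way to realize the closed-figure case is sketched below: take the bounding rectangle of the finger track as the screen capture selection area, and fall back to the full screen when no usable track exists. This particular rule and the type names are assumptions made for illustration, not something the patent prescribes.

```kotlin
data class Point(val x: Int, val y: Int)
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

// Hypothetical helper: derive the screen capture selection area from a roughly closed track.
fun selectionFromTrack(track: List<Point>, fullScreen: Rect): Rect {
    if (track.size < 3) return fullScreen          // no closed figure: use the full-screen area
    return Rect(
        left = track.minOf { it.x },
        top = track.minOf { it.y },
        right = track.maxOf { it.x },
        bottom = track.maxOf { it.y },
    )
}

fun main() {
    val screen = Rect(0, 0, 1080, 2340)
    val closedFigure = listOf(Point(100, 500), Point(900, 520), Point(880, 1400), Point(120, 1380))
    println(selectionFromTrack(closedFigure, screen))  // Rect(left=100, top=500, right=900, bottom=1400)
}
```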
In the embodiment of the present invention, the screen capture input may be a combined input, for example a combination of adjusting the selection box and pressing a screen capture key, namely: the user first adjusts the size, position and so on of the selection frame on the current interface to determine at least a partial area of the current interface as the screen capture selection area, and then presses the screen capture key to capture the image in the screen capture selection area. The screen capture input may also be a single input, for example a sliding input that draws a closed figure, namely: the user draws a closed figure on the current interface, the inner area of the closed figure is determined as the screen capture selection area, and the image in the screen capture selection area is captured.
Optionally, when the current interface displays a floating window, the method further includes:
if a reduction input for reducing the floating window to a target proportion is received;
and switching the floating window to the floating button in response to the reduction input.
Specifically, still taking fig. 2-3 as an example, after the user starts the screen capture function, the floating window 24 is displayed on the current interface 21, the image 231 to be stitched is displayed in the floating window 24, and if the user zooms out the floating window 24, the floating window 24 is zoomed out to the floating button 25.
Optionally, the target ratio is smaller than or equal to a preset ratio threshold.
That is, when the floating window is shrunk to a certain degree, it will automatically become the floating button. For example, if the user pinches the floating window and wants to reduce the image to be stitched to 29% of the original image to be stitched, the floating window automatically becomes the floating button.
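A minimal sketch of this collapse rule is given below. The 0.3 threshold and the type names are assumptions chosen for the example; the text only requires the target proportion to be at or below a preset ratio threshold (29% is the example it gives).

```kotlin
sealed interface DisplayTool
data class FloatingWindow(val scale: Double) : DisplayTool   // shows the image to be stitched
class FloatingButton : DisplayTool                           // shows only a thumbnail

// Hypothetical preset ratio threshold.
const val SCALE_THRESHOLD = 0.3

fun onReduce(window: FloatingWindow, targetScale: Double): DisplayTool =
    if (targetScale <= SCALE_THRESHOLD) FloatingButton()     // collapse into the floating button
    else window.copy(scale = targetScale)                    // otherwise just shrink the window

fun main() {
    println(onReduce(FloatingWindow(scale = 1.0), targetScale = 0.29) is FloatingButton)  // true
    println(onReduce(FloatingWindow(scale = 1.0), targetScale = 0.60))                    // FloatingWindow(scale=0.6)
}
```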
Optionally, when the current interface displays a floating window or a floating button, the method further includes:
if receiving a dragging input for dragging the floating window or the floating button to the edge of the current interface;
and responding to the dragging input, switching the floating window or the floating button into the adsorption button, and displaying the adsorption button on the edge of the current interface.
Specifically, still taking fig. 2 and 4 as an example, after the user starts the screen capture function, the floating window 24 is displayed on the current interface 21, the image 231 to be stitched is displayed in the floating window 24, and if the user drags the floating window 24 to the edge (for example, the right edge) of the current interface 21, the floating window 24 automatically becomes the adsorption button 26, and is adsorbed and displayed on the edge of the current interface 21.
Still taking fig. 3-4 as an example, after the user starts the screen capture function, the floating button 25 is displayed on the current interface 21 and a thumbnail of the image to be stitched is displayed in the floating button 25. If the user drags the floating button 25 to the edge (for example, the right edge) of the current interface 21, the floating button 25 automatically changes to the adsorbed state, that is, it becomes the adsorption button 26 and is displayed attached to the edge of the current interface 21.
Therefore, the current interface is occluded as little as possible, and it is more convenient for the user to select the screen capture selection area.
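The edge-docking behaviour described above can be sketched as a small state transition applied when the drag ends; the 24-pixel edge margin and the type names below are illustrative assumptions rather than values taken from the patent.

```kotlin
data class Position(val x: Int, val y: Int)

sealed interface ToolState
data class Floating(val at: Position) : ToolState   // floating window or floating button
data class Docked(val y: Int) : ToolState           // adsorption button attached to a screen edge

const val EDGE_MARGIN = 24   // hypothetical: how close to the edge counts as "dragged to the edge"

fun onDragEnd(dropAt: Position, screenWidth: Int): ToolState =
    if (dropAt.x <= EDGE_MARGIN || dropAt.x >= screenWidth - EDGE_MARGIN)
        Docked(y = dropAt.y)        // switch to the adsorption button on the nearest edge
    else
        Floating(at = dropAt)       // otherwise stay floating where it was dropped

fun main() {
    println(onDragEnd(Position(1070, 900), screenWidth = 1080))  // Docked(y=900)
    println(onDragEnd(Position(500, 900), screenWidth = 1080))   // Floating(at=Position(x=500, y=900))
}
```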
In some preferred embodiments of the present invention, the stitching input includes a drag input dragging the image to be processed to the display tool;
the step of splicing the image to be spliced and the image to be processed in response to the splicing input to obtain a spliced image comprises the following steps:
and if the image to be processed is dragged to the area where the display tool is located and/or the peripheral area of the display tool, displaying the display tool as a splicing window with a default size, wherein the splicing window is the display tool or is obtained by switching the display tool.
Taking fig. 5-6 as an example, after the image in the screen capture selection area 51 is captured, the captured image 52 to be processed can be pressed, held and dragged to the display tool 53 for image stitching. When the image to be processed is dragged to the area where the display tool 53 is located and/or the area around the display tool 53, the display tool 53 is displayed as a splicing window 54 of a default size, regardless of whether it is a floating window, a floating button or an adsorption button, and the image to be stitched 55 is displayed in the splicing window 54. That is, if the display tool 53 is a floating button or an adsorption button, the floating button or the adsorption button is expanded and switched to the splicing window 54 of the default size; if the display tool 53 is a floating window whose size is not the default size, for example a smaller floating window, the floating window is enlarged and switched to the splicing window 54 of the default size; if the display tool 53 is a floating window whose size is exactly the default size, the floating window itself is the splicing window 54.
Preferably, the step of stitching the image to be stitched and the image to be processed in response to the stitching input to obtain a stitched image includes:
determining the splicing position of the image to be processed and the image to be spliced according to the termination position of the dragging input;
and splicing the image to be spliced and the image to be processed according to the splicing position to obtain the spliced image, and displaying the spliced image in the splicing window.
Specifically, still taking fig. 6 as an example, the image to be processed 52 may be dragged to any direction of the image to be stitched 55 for stitching, and may be directly stitched, such as: the termination positions of the dragging input are positioned above, below, left and right of the image to be stitched 55, so that the image to be processed 52 is stitched in four directions of the image to be stitched 55, namely, up, down, left and right; overlapping splices may also be made, such as: the termination position of the drag input is located at the lower right corner of the image to be stitched 55, so that the image to be processed 52 covers the lower right corner of the image to be stitched 55, or the image to be stitched 55 covers the image to be processed 52 for overlap stitching.
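One plausible reading of "determine the stitching position from the termination position of the drag" is sketched below: if the drag ends inside the image to be stitched, the two images are overlap-stitched; otherwise the new image is attached on the side where the drag ended. The exact rule and all names here are assumptions made for illustration only.

```kotlin
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(x: Int, y: Int) = x in left..right && y in top..bottom
}

enum class StitchSide { ABOVE, BELOW, LEFT, RIGHT, OVERLAP }

// Hypothetical rule mapping the drag's termination position to a stitching position.
fun stitchSide(toBeStitched: Rect, dropX: Int, dropY: Int): StitchSide = when {
    toBeStitched.contains(dropX, dropY) -> StitchSide.OVERLAP  // e.g. covering the lower right corner
    dropY < toBeStitched.top            -> StitchSide.ABOVE
    dropY > toBeStitched.bottom         -> StitchSide.BELOW
    dropX < toBeStitched.left           -> StitchSide.LEFT
    else                                -> StitchSide.RIGHT
}

fun main() {
    val imageToBeStitched = Rect(100, 300, 900, 1500)
    println(stitchSide(imageToBeStitched, dropX = 500, dropY = 1600))  // BELOW
    println(stitchSide(imageToBeStitched, dropX = 850, dropY = 1450))  // OVERLAP
}
```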
Further, the image to be spliced is formed by splicing a plurality of sub-images;
the step of determining the splicing position of the image to be processed and the image to be spliced according to the termination position of the dragging input comprises the following steps:
splitting the image to be spliced into a first image block and a second image block if the image to be processed is dragged to a first area of the image to be spliced, wherein the first area is an area within a preset range of a splicing position of the first image block and the second image block, and the first image block and the second image block both comprise at least one sub-image;
if the termination position of the dragging input is located between the first image block and the second image block, determining that the splicing position of the image to be processed is located between the first image block and the second image block;
the step of splicing the image to be spliced and the image to be processed according to the splicing position to obtain a spliced image comprises the following steps:
and inserting the image to be processed between the first image block and the second image block to obtain the spliced image.
Taking fig. 7 as an example, the image A to be stitched is formed by stitching a plurality of sub-images. If the image B to be processed is dragged near a stitching position of the image A to be stitched, the image A to be stitched is expanded and split at that stitching position into the first image block A1 and the second image block A2, so that the image B to be processed can be inserted between the first image block A1 and the second image block A2. If the termination position of the dragging input is located between the first image block A1 and the second image block A2, the image B to be processed is inserted between the first image block A1 and the second image block A2.
The step of splitting the image to be stitched into a first image block and a second image block comprises:
displaying, between the first image block and the second image block, a prompt identifier for assisting the user in stitching.
The prompt identifier can be a rectangular frame, parallel lines, or another mark that helps the user stitch accurately.
Still taking fig. 7 as an example, the image A to be stitched is split into the first image block A1 and the second image block A2 according to its original stitching position, and two parallel dotted lines 71 are displayed between the first image block A1 and the second image block A2 to guide the user to stitch the image B to be processed accurately between the first image block A1 and the second image block A2.
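The split-and-insert behaviour of fig. 7 can be sketched as an operation on the ordered list of sub-images that make up the image to be stitched; the seam index is assumed to be derived elsewhere from the drag's termination position, and all names here are illustrative.

```kotlin
data class SubImage(val label: String)   // hypothetical stand-in for one sub-image

// Split the image to be stitched at an existing seam into a first image block (A1) and a
// second image block (A2), then insert the image to be processed (B) between them.
fun insertAtSeam(subImages: List<SubImage>, seamIndex: Int, toBeProcessed: SubImage): List<SubImage> {
    val firstBlock = subImages.take(seamIndex)     // A1: everything before the seam
    val secondBlock = subImages.drop(seamIndex)    // A2: everything after the seam
    return firstBlock + toBeProcessed + secondBlock
}

fun main() {
    val a = listOf(SubImage("A-sub1"), SubImage("A-sub2"), SubImage("A-sub3"))
    val stitched = insertAtSeam(a, seamIndex = 2, toBeProcessed = SubImage("B"))
    println(stitched.map { it.label })   // [A-sub1, A-sub2, B, A-sub3]
}
```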
In the embodiment of the invention, if the stitched image exceeds the display range of the default-size stitching window, part of the image to be stitched can be moved in the opposite direction, so that the visible area of the stitching window always shows the latest result.
For example, the image to be stitched includes a first image block (located on the left side of the image to be stitched) and a second image block (located on the right side of the image to be stitched) which are stitched left and right, the image to be processed is stitched between the first image block and the second image block, the first image block moves left, and the second image block moves right, so as to ensure that the image to be processed is displayed in the middle of the stitching window.
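For the left/right example just described, the re-layout can be sketched as computing new x positions so that the inserted image ends up centred in the stitching window, with the first block pushed left and the second pushed right; the widths and names used here are illustrative assumptions.

```kotlin
data class Layout(val firstBlockX: Int, val insertedX: Int, val secondBlockX: Int)

// Centre the inserted image in a stitching window of width windowWidth, pushing the
// first image block to its left and the second image block to its right.
fun layoutAfterInsert(firstBlockWidth: Int, insertedWidth: Int, windowWidth: Int): Layout {
    val insertedX = (windowWidth - insertedWidth) / 2      // inserted image centred in the window
    return Layout(
        firstBlockX = insertedX - firstBlockWidth,         // may be negative, i.e. partly off screen
        insertedX = insertedX,
        secondBlockX = insertedX + insertedWidth,
    )
}

fun main() {
    println(layoutAfterInsert(firstBlockWidth = 800, insertedWidth = 400, windowWidth = 1000))
    // Layout(firstBlockX=-500, insertedX=300, secondBlockX=700)
}
```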
In some preferred embodiments of the present invention, the image to be processed may be adjusted first, and then the adjusted image to be processed and the image to be stitched are stitched to obtain the stitched image.
Specifically, the step of stitching the image to be stitched and the image to be processed in response to the stitching input to obtain a stitched image further includes:
receiving an adjustment input for adjusting the image to be processed;
adjusting the image to be processed in response to the adjustment input;
and splicing the image to be spliced and the adjusted image to be processed according to the splicing position to obtain the spliced image.
Wherein the adjustment input comprises at least one of:
a rotation input for rotating the image to be processed;
an enlargement input for enlarging the image to be processed;
a reduction input for reducing the image to be processed;
a change input for changing a transparency of the image to be processed.
Still taking fig. 7 as an example, if the user presses heavily on the image B to be processed, a rotatable identifier 72 may be displayed, and the user can press the rotatable identifier 72 to rotate the image B to be processed. That is, the rotation input includes a heavy press input on the image B to be processed and a press input on the rotatable identifier 72. Alternatively, if the user presses heavily on the image B to be processed, the rotatable identifier need not be displayed; the image B to be processed directly enters a rotating state and can be rotated according to the user's sliding track. That is, the rotation input includes a heavy press input on the image B to be processed and a sliding input on the image B to be processed. In this way, the user can stitch in any direction, for example stitching the image to be processed and the image to be stitched left and right, which is more flexible and convenient and meets the user's requirements.
The enlargement input may be a multi-finger spread gesture, the reduction input may be a multi-finger pinch gesture, and the change input may be a sliding input on the image to be processed, for example: sliding up on the image to be processed increases the transparency, and sliding down reduces it.
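The adjustments themselves reduce to updating a few properties of the image to be processed before stitching, as in the sketch below. How each gesture is recognized is outside the scope of the example, and the property names are assumptions made for illustration.

```kotlin
// Hypothetical adjustable properties of the image to be processed.
data class Adjustable(val rotationDeg: Float = 0f, val scale: Float = 1f, val alpha: Float = 1f)

fun rotate(img: Adjustable, byDeg: Float) = img.copy(rotationDeg = (img.rotationDeg + byDeg) % 360f)
fun zoom(img: Adjustable, factor: Float) = img.copy(scale = img.scale * factor)
fun setTransparency(img: Adjustable, alpha: Float) = img.copy(alpha = alpha.coerceIn(0f, 1f))

fun main() {
    var b = Adjustable()
    b = rotate(b, 90f)            // rotation input, e.g. heavy press then drag
    b = zoom(b, 0.5f)             // reduction input, e.g. multi-finger pinch
    b = setTransparency(b, 0.7f)  // change input, e.g. slide down to lower the transparency
    println(b)                    // Adjustable(rotationDeg=90.0, scale=0.5, alpha=0.7)
}
```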
In other embodiments of the present invention, the adjustment input may further include: a change input for changing the contrast or brightness of the image to be processed, a fill input for filling text information on the image to be processed, and the like, and the present invention is not limited thereto.
In other embodiments of the present invention, the images to be stitched may also be adjusted and then stitched.
Specifically, an adjustment input (such as a rotation input, a magnification input, a reduction input, a transparency adjustment input, a contrast adjustment input and/or a brightness adjustment input) of the image to be stitched is received, and the image to be stitched is adjusted in response to the adjustment input.
Taking fig. 8 as an example, a rotation input for selecting an image to be stitched is received, each sub-image in an image a to be stitched including a plurality of sub-images stitched up and down is rotated, and then the rotated image a to be stitched and an image B to be processed are stitched.
Of course, in other preferred embodiments of the present invention, the image to be stitched and the image to be processed may be stitched to obtain a stitched image, and then the stitched image is edited and adjusted.
Specifically, after the step of stitching the image to be stitched and the image to be processed in response to the stitching input to obtain a stitched image, the method further includes:
receiving an editing input for editing the stitched image;
and responding to the editing input, editing the spliced image to obtain a screen shot image.
Specifically, the whole stitched image may be edited, or the stitched image may be split into a plurality of sub-images according to the original stitching positions and one or more of the sub-images edited, where the editing may include at least one of: adjusting the size, brightness, transparency or contrast of the image, and rotating the image.
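Keeping the stitched image as its ordered sub-images makes both whole-image and per-sub-image editing straightforward, as sketched below with brightness as the only editable property; the representation and the names are assumptions for illustration.

```kotlin
data class Sub(val label: String, val brightness: Float = 1f)   // one sub-image of the stitched image

fun editWhole(stitched: List<Sub>, brightness: Float): List<Sub> =
    stitched.map { it.copy(brightness = brightness) }

fun editOne(stitched: List<Sub>, index: Int, brightness: Float): List<Sub> =
    stitched.mapIndexed { i, s -> if (i == index) s.copy(brightness = brightness) else s }

fun main() {
    val stitched = listOf(Sub("A1"), Sub("B"), Sub("A2"))
    println(editOne(stitched, index = 1, brightness = 1.3f))   // only the inserted screenshot B changes
    println(editWhole(stitched, brightness = 0.8f))            // the whole stitched image changes
}
```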
In some preferred embodiments of the present invention, when the screen capture function is in the on state, the current interface further displays an operation button bar, where the operation button bar includes at least one of the following buttons: an opening button used for controlling the display of the selection frame, a closing button used for controlling the disappearance of the selection frame, an enlarging button, a reducing button, a rotating button and a transparency adjusting button, where the operation objects of the enlarging button, the reducing button, the rotating button and the transparency adjusting button are the image to be processed in the stitched image, the image to be stitched in the stitched image, or the stitched image itself.
Taking fig. 9 as an example, an operation button bar 92 is further displayed on the current interface 91; the opening button in the operation button bar 92 can control the selection frame 93 to be displayed, the closing button can make the selection frame 93 disappear, and the enlarging, reducing, rotating and transparency adjusting buttons can operate on the whole stitched image, or the stitched image can be split into a plurality of sub-images and one or more of them operated on (including operating on the image to be processed in the stitched image, or on one or more sub-images of the image to be stitched).
Taking fig. 10 as an example, a stitching window 1002 of a default size and an operation button bar 1003 are displayed on the current interface 1001, and a stitched image 1004 obtained by stitching the image 10041 to be stitched and the image 10042 to be processed is displayed in the stitching window 1002. Thus, the user can edit the stitched image 1004 more finely by making full use of the buttons in the operation button bar 1003, such as: enlarging the image, reducing the image, rotating the image, adjusting the transparency of the image, adjusting the brightness of the image, and/or adjusting the contrast of the image.
Preferably, the step of editing the stitched image to obtain the screen shot image includes:
receiving confirmation input for confirming that the spliced image is edited;
and responding to the confirmation input, taking the edited spliced image as the screen capture image, storing the screen capture image, and restoring the spliced window into the display tool.
Specifically, after the user edits the stitched image, the stitched window is restored to the original state, such as: a floating window of original size, a floating button or a suction button.
Preferably, the screen capture method further comprises:
and if the operation button strip is dragged to the edge of the current interface, controlling the operation button strip to be adsorbed and displayed on the edge of the current interface.
That is to say, when the operation button strip is dragged to the edge of the current interface, the operation button strip is also displayed in an adsorption manner, so that the current interface display is simpler, and the image splicing and editing by a user are facilitated.
In the prior art, the screen capture image is obtained mainly by the following three methods: firstly, intercepting a full screen, namely intercepting an image of the size of the whole screen; secondly, long screen capture, namely capturing the content of the current screen, sliding down the display screen, capturing again, and splicing the images of the whole screen size captured for multiple times up and down; and thirdly, local screen capturing, namely capturing partial areas on the screen. However, in the prior art, only the obtained whole screen capture image can be edited, and only a plurality of screen capture images can be simply spliced.
In the embodiment of the invention, the user can freely splice the screenshots, such as: overlapping and splicing, and directly splicing; the screenshot can be spliced with other images (images in any preset application); the image to be spliced comprising a plurality of sub-images can be split, and the screenshot can be inserted at any position desired by the user. After the screenshots are spliced, the user can edit the whole spliced image, or can edit part of the sub-images (such as the inserted screenshots or one or more sub-images in the image to be spliced) in the spliced image. Therefore, the user operation is more convenient, and the spliced image really wanted by the user can be obtained.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a terminal according to a second embodiment of the present invention, where the terminal 110 includes:
a first receiving module 111, configured to receive a screen capture input for capturing at least a partial region in a current interface;
the intercepting module 112 is configured to determine, in response to the screen capture input, a screen capture selection area corresponding to the screen capture input, and intercept an image in the screen capture selection area to obtain an image to be processed;
a second receiving module 113, configured to receive a stitching input for stitching an image to be stitched and the image to be processed;
and the splicing module 114 is configured to splice the image to be spliced and the image to be processed in response to the splicing input, so as to obtain a spliced image.
By adopting the terminal provided by the embodiment of the invention, a user can freely select the screen capture selection area, and the image to be processed obtained by capturing the screen capture selection area is spliced with the image to be spliced, so that the spliced image desired by the user can be obtained, and different requirements of the user are met.
Preferably, the image to be stitched is an image captured before capturing the image to be processed.
Preferably, the terminal 110 further includes:
the third receiving module is used for receiving a starting input for starting the screen capture function;
and the display module is used for responding to the starting input and displaying a selection frame for determining the screen capture selection area and/or a display tool related to the image to be spliced on the current interface.
Preferably, the display module is configured to display a floating window on the current interface, where the image to be stitched is displayed in the floating window; or displaying a suspension button on the current interface, wherein the suspension button displays the thumbnail of the image to be spliced; alternatively, an adsorption button is displayed on the edge of the current interface.
Preferably, the terminal 110 further includes:
the first switching module is used for receiving a reduction input for reducing the floating window to a target proportion when the floating window is displayed on the current interface; switching the hover window to the hover button in response to the zoom-out input.
Preferably, the terminal 110 further includes:
the second switching module is used for displaying a floating window on the current interface or displaying a floating button, and if receiving dragging input for dragging the floating window or the floating button to the edge of the current interface; and responding to the dragging input, switching the floating window or the floating button into the adsorption button, and displaying the adsorption button on the edge of the current interface.
Preferably, the stitching input includes a drag input for dragging the image to be processed to the display tool;
the splicing module 114 is configured to display the display tool as a splicing window of a default size if the to-be-processed image is dragged to an area where the display tool is located and/or an area around the display tool, where the splicing window is the display tool or is obtained by switching the display tool.
Preferably, the stitching module 114 is configured to determine a stitching position of the image to be processed and the image to be stitched according to a termination position of the dragging input; and splicing the image to be spliced and the image to be processed according to the splicing position to obtain the spliced image, and displaying the spliced image in the splicing window.
Preferably, the image to be spliced is formed by splicing a plurality of sub-images;
the splicing module 114 is configured to split the image to be spliced into a first image block and a second image block if the image to be processed is dragged to a first area of the image to be spliced, where the first area is an area within a preset range of a spliced position of the first image block and the second image block, and each of the first image block and the second image block includes at least one sub-image; if the termination position of the dragging input is located between the first image block and the second image block, determining that the splicing position of the image to be processed is located between the first image block and the second image block; and inserting the image to be processed between the first image block and the second image block to obtain the spliced image.
Preferably, the stitching module 114 is configured to display a prompt identifier for assisting the user in stitching between the first image block and the second image block.
Preferably, the stitching module 114 is configured to receive an adjustment input for adjusting the image to be processed; adjusting the image to be processed in response to the adjustment input; and splicing the image to be spliced and the adjusted image to be processed according to the splicing position to obtain the spliced image.
Preferably, the terminal 110 further includes:
the fourth receiving module is used for receiving editing input used for editing the spliced image;
and the editing module is used for responding to the editing input and editing the spliced image to obtain a screen shot image.
Preferably, when the screen capture function is in the on state, the current interface further displays an operation button bar, where the operation button bar includes at least one of the following buttons: an opening button used for controlling the display of the selection frame, a closing button used for controlling the disappearance of the selection frame, an enlarging button, a reducing button, a rotating button and a transparency adjusting button, where the operation objects of the enlarging button, the reducing button, the rotating button and the transparency adjusting button are the image to be processed in the stitched image, the image to be stitched in the stitched image, or the stitched image itself.
Preferably, the terminal 110 further includes:
and the control module is used for controlling the operation button strip to be adsorbed and displayed on the edge of the current interface if the operation button strip is dragged to the edge of the current interface.
The terminal provided in the embodiment of the present invention can implement each process in the method embodiments corresponding to fig. 1 to fig. 10, and is not described here again to avoid repetition.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a terminal 120 according to a third embodiment of the present invention, where the terminal 120 includes a processor 121, a memory 122, and a computer program stored in the memory 122 and capable of running on the processor 121, where the computer program implements the following steps when executed by the processor 121:
receiving screen capture input for capturing at least partial area in the current interface;
responding to the screen capture input, determining at least partial area corresponding to the screen capture input as a screen capture selection area, and capturing an image in the screen capture selection area to obtain an image to be processed;
receiving a stitching input for stitching an image to be stitched and the image to be processed;
and responding to the splicing input, splicing the image to be spliced and the image to be processed to obtain a spliced image.
By adopting the terminal provided by the embodiment of the invention, a user can freely select the screen capture selection area, and the image to be processed obtained by capturing the screen capture selection area is spliced with the image to be spliced, so that the spliced image desired by the user can be obtained, and different requirements of the user are met.
Preferably, the image to be stitched is an image captured before capturing the image to be processed.
Preferably, the computer program when executed by the processor 121 further implements the steps of:
before the step of receiving the screen capture input for capturing at least part of the area in the current interface, the method further comprises the following steps:
receiving a start input for starting a screen capture function;
and responding to the starting input, and displaying a selection frame for determining the screen capture selection area and/or a display tool associated with the images to be spliced on the current interface.
Preferably, the computer program when executed by the processor 121 further implements the steps of:
the step of displaying a display tool associated with the image to be spliced on the current interface comprises the following steps:
displaying a suspension window on a current interface, wherein the images to be spliced are displayed in the suspension window; or
Displaying a suspension button on a current interface, wherein thumbnails of the images to be spliced are displayed in the suspension button; or
And displaying an adsorption button on the edge of the current interface.
Preferably, when the current interface displays a floating window, the computer program when executed by the processor 121 further implements the following steps:
if a reduction input for reducing the floating window to a target proportion is received, switching the floating window to the floating button in response to the reduction input.
Preferably, when the current interface displays a floating window or a floating button, the computer program when executed by the processor 121 further implements the following steps:
if receiving a dragging input for dragging the floating window or the floating button to the edge of the current interface;
and responding to the dragging input, switching the floating window or the floating button into the adsorption button, and displaying the adsorption button on the edge of the current interface.
Preferably, the stitching input includes a drag input for dragging the image to be processed to the display tool;
the computer program when executed by the processor 121 may further implement the steps of:
the step of splicing the image to be spliced and the image to be processed in response to the splicing input to obtain a spliced image comprises the following steps:
and if the image to be processed is dragged to the area where the display tool is located and/or the peripheral area of the display tool, displaying the display tool as a splicing window with a default size, wherein the splicing window is the display tool or is obtained by switching the display tool.
Preferably, the computer program when executed by the processor 121 further implements the steps of:
the step of splicing the image to be spliced and the image to be processed in response to the splicing input to obtain a spliced image comprises the following steps:
determining the splicing position of the image to be processed and the image to be spliced according to the termination position of the dragging input;
and splicing the image to be spliced and the image to be processed according to the splicing position to obtain the spliced image, and displaying the spliced image in the splicing window.
Preferably, the image to be spliced is formed by splicing a plurality of sub-images;
the computer program when executed by the processor 121 may further implement the steps of:
the step of determining the splicing position of the image to be processed and the image to be spliced according to the termination position of the dragging input comprises the following steps:
splitting the image to be spliced into a first image block and a second image block if the image to be processed is dragged to a first area of the image to be spliced, wherein the first area is an area within a preset range of a splicing position of the first image block and the second image block, and the first image block and the second image block both comprise at least one sub-image;
if the termination position of the dragging input is located between the first image block and the second image block, determining that the splicing position of the image to be processed is located between the first image block and the second image block;
the step of splicing the image to be spliced and the image to be processed according to the splicing position to obtain a spliced image comprises the following steps:
and inserting the image to be processed between the first image block and the second image block to obtain the spliced image.
Preferably, the computer program when executed by the processor 121 further implements the steps of:
the step of splitting the image to be stitched into a first image block and a second image block comprises:
and displaying prompt identifiers used for assisting the splicing of the user between the first image block and the second image block.
Preferably, the computer program when executed by the processor 121 further implements the steps of:
the step of stitching the image to be stitched and the image to be processed in response to the stitching input to obtain a stitched image further comprises:
receiving an adjustment input for adjusting the image to be processed;
adjusting the image to be processed in response to the adjustment input;
and splicing the image to be spliced and the adjusted image to be processed according to the splicing position to obtain the spliced image.
Preferably, the computer program when executed by the processor 121 further implements the steps of:
after the step of stitching the image to be stitched and the image to be processed in response to the stitching input to obtain a stitched image, the method further comprises:
receiving an editing input for editing the stitched image;
and responding to the editing input, editing the spliced image to obtain a screen shot image.
Preferably, when the screen capture function is in the on state, the current interface further displays an operation button bar, where the operation button bar includes at least one of the following buttons: an opening button used for controlling the display of the selection frame, a closing button used for controlling the disappearance of the selection frame, an enlarging button, a reducing button, a rotating button and a transparency adjusting button, where the operation objects of the enlarging button, the reducing button, the rotating button and the transparency adjusting button are the image to be processed in the stitched image, the image to be stitched in the stitched image, or the stitched image itself.
Preferably, the computer program when executed by the processor 121 further implements the steps of:
and if the operation button strip is dragged to the edge of the current interface, controlling the operation button strip to be adsorbed and displayed on the edge of the current interface.
The terminal can realize each process of the screen capturing method embodiment, and can achieve the same technical effect, and for avoiding repetition, the details are not repeated here.
Fig. 13 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention, where the terminal 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal configuration shown in fig. 13 is not intended to be limiting, and that the terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The user input unit 107 is used for receiving screen capture input for capturing at least part of the area in the current interface;
the processor 110 is configured to determine, in response to the screen capture input, that at least a partial region corresponding to the screen capture input is a screen capture selection region, and capture an image in the screen capture selection region to obtain an image to be processed;
the user input unit 107 is configured to receive a stitching input for stitching an image to be stitched and the image to be processed;
the processor 110 is configured to respond to the stitching input, and stitch the image to be stitched and the image to be processed to obtain a stitched image.
By adopting the terminal provided by the embodiment of the invention, a user can freely select the screen capture selection area, and the image to be processed obtained by capturing the screen capture selection area is spliced with the image to be spliced, so that the spliced image desired by the user can be obtained, and different requirements of the user are met.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse web pages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The terminal 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it reports the operation to the processor 110 to determine the type of touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to that type. Although in fig. 8 the touch panel 1071 and the display panel 1061 are shown as two independent components implementing the input and output functions of the terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal; this is not limited herein.
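To make the touch pipeline described above concrete, the following self-contained Kotlin sketch (an editor's illustration under assumed types; it is not code from this application or from any particular platform API) shows how a processor-side handler might turn the touch-point coordinates reported by the touch controller into either a tap or a drag, the kind of distinction on which a dragging input for splicing would rely.

```kotlin
// Illustrative sketch: classify a touch sequence, delivered as the point
// coordinates produced by the touch controller, into a tap or a drag.
// The types, names, and threshold below are assumptions for demonstration.
import kotlin.math.hypot

data class TouchPoint(val x: Float, val y: Float)

sealed class TouchResult {
    data class Tap(val at: TouchPoint) : TouchResult()
    data class Drag(val from: TouchPoint, val to: TouchPoint) : TouchResult()
}

fun classify(points: List<TouchPoint>, dragThresholdPx: Float = 24f): TouchResult? {
    if (points.isEmpty()) return null
    val down = points.first()   // where the touch started
    val up = points.last()      // termination position of the gesture
    val moved = hypot(up.x - down.x, up.y - down.y)
    return if (moved < dragThresholdPx) TouchResult.Tap(up)
           else TouchResult.Drag(down, up)  // the 'to' point is what a splicing input would use
}

fun main() {
    println(classify(listOf(TouchPoint(10f, 10f))))                          // Tap
    println(classify(listOf(TouchPoint(10f, 10f), TouchPoint(200f, 400f))))  // Drag
}
```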
The interface unit 108 is an interface for connecting an external device to the terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information or power) from an external device and transmit the received input to one or more elements within the terminal 100, or to transmit data between the terminal 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the terminal (such as audio data and a phonebook). Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal. It connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to the various components. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, and power consumption through the power management system.
In addition, the terminal 100 includes some functional modules that are not shown, and thus, the detailed description thereof is omitted.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it implements each process of the screen capturing method embodiment described above and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware; in many cases, however, the former is the better implementation. Based on such an understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (13)
1. A screen capture method applied to a terminal, characterized by comprising the following steps:
receiving a screen capture input for capturing at least a partial area in the current interface;
responding to the screen capture input, determining at least a partial area corresponding to the screen capture input as a screen capture selection area, and capturing an image in the screen capture selection area to obtain an image to be processed;
receiving a stitching input for stitching an image to be stitched and the image to be processed;
responding to the splicing input, and freely splicing the image to be spliced and the image to be processed to obtain a spliced image;
wherein the free splicing specifically includes: direct splicing; overlapping splicing; and splitting an image to be spliced that is formed by splicing a plurality of sub-images and inserting the image to be processed at a position between any two sub-images;
before the step of receiving the screen capture input for capturing at least a partial area in the current interface, the method further comprises the following steps:
receiving a start input for starting a screen capture function;
responding to the starting input, and displaying a selection frame for determining the screen capture selection area and a display tool associated with the image to be spliced on a current interface;
the splicing input comprises a dragging input for dragging the image to be processed to the display tool;
wherein the step of freely splicing the image to be spliced and the image to be processed in response to the splicing input to obtain a spliced image comprises the following steps:
if the image to be processed is dragged to the area where the display tool is located and/or the area around the display tool, displaying the display tool as a splicing window with a default size, wherein the splicing window is the display tool or is obtained by switching the display tool, and the image to be spliced is displayed in the splicing window;
wherein the step of freely splicing the image to be spliced and the image to be processed in response to the splicing input to obtain a spliced image comprises the following steps:
determining the splicing position of the image to be processed and the image to be spliced according to the termination position of the dragging input;
and splicing the image to be spliced and the image to be processed according to the splicing position to obtain the spliced image, and displaying the spliced image in the splicing window.
2. The screen capture method of claim 1, wherein the image to be spliced is an image captured before the image to be processed is captured.
3. The screen capturing method of claim 1, wherein the step of displaying a display tool associated with the image to be spliced on a current interface comprises:
displaying a suspension window on a current interface, wherein the images to be spliced are displayed in the suspension window; or
displaying a suspension button on a current interface, wherein thumbnails of the images to be spliced are displayed in the suspension button; or
displaying an adsorption button on the edge of the current interface.
4. The screen capture method of claim 1,
the step of determining the splicing position of the image to be processed and the image to be spliced according to the termination position of the dragging input comprises the following steps:
splitting the image to be spliced into a first image block and a second image block if the image to be processed is dragged to a first area of the image to be spliced, wherein the first area is an area within a preset range of a splicing position of the first image block and the second image block, and the first image block and the second image block both comprise at least one sub-image;
if the termination position of the dragging input is located between the first image block and the second image block, determining that the splicing position of the image to be processed is located between the first image block and the second image block;
the step of splicing the image to be spliced and the image to be processed according to the splicing position to obtain a spliced image comprises the following steps:
and inserting the image to be processed between the first image block and the second image block to obtain the spliced image.
5. The screen capturing method according to claim 4, wherein the step of splitting the image to be spliced into a first image block and a second image block comprises:
and displaying prompt identifiers used for assisting the splicing of the user between the first image block and the second image block.
6. The screen capture method of claim 1, wherein the step of freely splicing the image to be spliced and the image to be processed in response to the splicing input to obtain a spliced image further comprises:
receiving an adjustment input for adjusting the image to be processed;
adjusting the image to be processed in response to the adjustment input;
and splicing the image to be spliced and the adjusted image to be processed according to the splicing position to obtain the spliced image.
7. A terminal, comprising:
the first receiving module is used for receiving a screen capture input for capturing at least a partial area in the current interface;
the intercepting module is used for responding to the screen capturing input, determining a screen capturing selection area corresponding to the screen capturing input, and intercepting an image in the screen capturing selection area to obtain an image to be processed;
the second receiving module is used for receiving splicing input used for splicing the image to be spliced and the image to be processed;
the splicing module is used for responding to the splicing input and freely splicing the image to be spliced and the image to be processed to obtain a spliced image;
wherein the free splicing specifically includes: direct splicing; overlapping splicing; and splitting an image to be spliced that is formed by splicing a plurality of sub-images and inserting the image to be processed at a position between any two sub-images;
further comprising:
the third receiving module is used for receiving a starting input for starting the screen capture function;
the display module is used for responding to the starting input and displaying a selection frame for determining the screen capture selection area and a display tool related to the image to be spliced on a current interface;
the splicing input comprises a dragging input for dragging the image to be processed to the display tool;
the splicing module is used for displaying the display tool as a splicing window with a default size if the image to be processed is dragged to the area where the display tool is located and/or the area around the display tool, wherein the splicing window is the display tool or is obtained by switching the display tool, and the image to be spliced is displayed in the splicing window;
the splicing module is used for determining the splicing position of the image to be processed and the image to be spliced according to the termination position of the dragging input; and splicing the image to be spliced and the image to be processed according to the splicing position to obtain the spliced image, and displaying the spliced image in the splicing window.
8. The terminal according to claim 7, wherein the image to be spliced is an image captured before the image to be processed is captured.
9. The terminal of claim 7,
the display module is used for displaying a suspension window on the current interface, and the images to be spliced are displayed in the suspension window; or displaying a suspension button on the current interface, wherein the suspension button displays the thumbnail of the image to be spliced; alternatively, an adsorption button is displayed on the edge of the current interface.
10. The terminal of claim 7,
the splicing module is configured to split the image to be spliced into a first image block and a second image block if the image to be processed is dragged to a first area of the image to be spliced, where the first area is an area within a preset range of a splicing position of the first image block and the second image block, and the first image block and the second image block both include at least one sub-image; if the termination position of the dragging input is located between the first image block and the second image block, determining that the splicing position of the image to be processed is located between the first image block and the second image block; and inserting the image to be processed between the first image block and the second image block to obtain the spliced image.
11. The terminal of claim 10,
the splicing module is further used for displaying, between the first image block and the second image block, a prompt identifier for assisting the user with splicing.
12. The terminal of claim 7,
the splicing module is used for receiving an adjustment input for adjusting the image to be processed; adjusting the image to be processed in response to the adjustment input; and splicing the image to be spliced and the adjusted image to be processed according to the splicing position to obtain the spliced image.
13. A terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the screen capturing method according to any one of claims 1 to 6.
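For illustration only, the splitting-and-inserting branch of the free splicing recited in claims 1, 4 and 10 (determining a splicing position from the termination position of the dragging input, splitting the image to be spliced into a first image block and a second image block, and inserting the image to be processed between them) can be sketched as follows. The data types, field names, and the vertical-stack model below are assumptions made for this sketch and do not describe the claimed implementation.

```kotlin
// Illustrative sketch of the splitting-and-inserting branch of free splicing:
// the image to be spliced is modelled as a vertical stack of sub-images, and
// the drag termination y-coordinate selects the gap between two sub-images.
// All types and names are assumptions made for demonstration.
data class SubImage(val label: String, val heightPx: Int)

/** Index of the gap (0..subImages.size) closest to the drop position. */
fun insertionIndexFor(subImages: List<SubImage>, dropY: Int): Int {
    var top = 0
    subImages.forEachIndexed { index, sub ->
        val middle = top + sub.heightPx / 2
        if (dropY < middle) return index   // drop lands in the upper half: insert before this sub-image
        top += sub.heightPx
    }
    return subImages.size                  // below everything: append at the end
}

/** Split at the chosen gap and insert the newly captured image between the two blocks. */
fun splice(subImages: List<SubImage>, captured: SubImage, dropY: Int): List<SubImage> {
    val index = insertionIndexFor(subImages, dropY)
    val first = subImages.take(index)      // "first image block"
    val second = subImages.drop(index)     // "second image block"
    return first + captured + second
}

fun main() {
    val toBeSpliced = listOf(SubImage("A", 300), SubImage("B", 300), SubImage("C", 300))
    val captured = SubImage("new", 300)
    // Drop at y = 450 selects the gap between B and C.
    println(splice(toBeSpliced, captured, dropY = 450).map { it.label })  // [A, B, new, C]
}
```

Direct splicing and overlapping splicing would reuse the same gap selection and differ only in how the resulting blocks are composited into the spliced image.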
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811564821.2A CN109656461B (en) | 2018-12-20 | 2018-12-20 | Screen capturing method and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811564821.2A CN109656461B (en) | 2018-12-20 | 2018-12-20 | Screen capturing method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109656461A CN109656461A (en) | 2019-04-19 |
CN109656461B (en) | 2020-09-22
Family
ID=66115456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811564821.2A Active CN109656461B (en) | 2018-12-20 | 2018-12-20 | Screen capturing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109656461B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110209456A (en) * | 2019-05-31 | 2019-09-06 | 努比亚技术有限公司 | Method, mobile terminal and the computer readable storage medium of the long screenshot of screen interface |
CN110502293B (en) * | 2019-07-10 | 2022-02-01 | 维沃移动通信有限公司 | Screen capturing method and terminal equipment |
CN110568973B (en) * | 2019-09-09 | 2021-04-06 | 网易(杭州)网络有限公司 | Screenshot method, screenshot device, storage medium and terminal equipment |
CN111857505B (en) * | 2020-07-16 | 2022-07-05 | Oppo广东移动通信有限公司 | Display method, device and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104658017A (en) * | 2015-03-20 | 2015-05-27 | 苏州首旗信息科技有限公司 | Picture processing software for mobile phone |
CN104899832A (en) * | 2015-06-23 | 2015-09-09 | 上海卓易科技股份有限公司 | Splicing screenshot method of mobile terminal and splicing screenshot device |
CN105094617A (en) * | 2015-08-24 | 2015-11-25 | 北京锤子数码科技有限公司 | Screen capturing method and device |
CN106127676A (en) * | 2016-06-17 | 2016-11-16 | 许之敏 | A kind of quick intercepting and the method for the synthesis long sectional drawing of multi-screen |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102779008B (en) * | 2012-06-26 | 2016-06-22 | 北京奇虎科技有限公司 | A kind of screenshot method and system |
CN105278824B (en) * | 2014-07-31 | 2018-06-26 | 维沃移动通信有限公司 | The screenshotss method and its terminal device of a kind of terminal device |
TWI536363B (en) * | 2015-03-31 | 2016-06-01 | 建碁股份有限公司 | tiling-display system and method thereof |
CN105549845B (en) * | 2015-12-09 | 2019-03-26 | 惠州Tcl移动通信有限公司 | A kind of continuous screenshot method of page based on mobile terminal, system and mobile terminal |
CN106775301A (en) * | 2016-11-29 | 2017-05-31 | 珠海市魅族科技有限公司 | The screenshot method and terminal device of a kind of terminal |
CN107577399A (en) * | 2017-08-24 | 2018-01-12 | 上海与德科技有限公司 | A kind of picture joining method and device |
2018-12-20: CN application CN201811564821.2A filed; granted as CN109656461B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN109656461A (en) | 2019-04-19 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN108536365B (en) | Image sharing method and terminal | |
CN109495711B (en) | Video call processing method, sending terminal, receiving terminal and electronic equipment | |
CN108471498B (en) | Shooting preview method and terminal | |
CN108495029B (en) | Photographing method and mobile terminal | |
CN109656461B (en) | Screen capturing method and terminal | |
CN110007837B (en) | Picture editing method and terminal | |
WO2021104321A1 (en) | Image display method and electronic device | |
WO2020156169A1 (en) | Display control method and terminal device | |
CN110851040B (en) | Information processing method and electronic equipment | |
CN110321044A (en) | Sharing files method and terminal | |
WO2019184947A1 (en) | Image viewing method and mobile terminal | |
WO2020238497A1 (en) | Icon moving method and terminal device | |
CN111031398A (en) | Video control method and electronic equipment | |
CN108646960B (en) | File processing method and flexible screen terminal | |
CN109683764B (en) | Icon management method and terminal | |
CN108132749B (en) | Image editing method and mobile terminal | |
CN110196668B (en) | Information processing method and terminal equipment | |
CN110209331A (en) | Information cuing method and terminal | |
WO2021143642A1 (en) | Image cropping method and electronic device | |
WO2021073579A1 (en) | Method for capturing scrolling screenshot and terminal device | |
CN110442279B (en) | Message sending method and mobile terminal | |
CN110968229A (en) | Wallpaper setting method and electronic equipment | |
CN109413333B (en) | Display control method and terminal | |
CN109388324B (en) | Display control method and terminal | |
CN108804628B (en) | Picture display method and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |