CN109683777B - Image processing method and terminal equipment


Info

Publication number
CN109683777B
CN109683777B
Authority
CN
China
Prior art keywords
image, sub, input, sliding, target
Prior art date
Legal status
Active (the legal status is an assumption and is not a legal conclusion)
Application number
CN201811554757.XA
Other languages
Chinese (zh)
Other versions
CN109683777A (en)
Inventor
朱宇轩
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811554757.XA
Publication of CN109683777A
Application granted
Publication of CN109683777B
Status: Active
Anticipated expiration

Classifications

    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures, for inputting data by handwriting, e.g. gesture or text
    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Abstract

The invention provides an image processing method and a terminal device. The method comprises: receiving N sliding inputs from a user; in response to the N sliding inputs, generating M sub-images, wherein the image contents of the M sub-images are associated with the sliding tracks of the N sliding inputs; receiving a first input from the user on the M sub-images; and, in response to the first input, splicing the M sub-images according to splicing parameters selected by the first input and outputting a target spliced image, wherein M and N are both positive integers. In this way, the user can capture content displayed on the display screen along a sliding track formed by a sliding input to obtain sub-images, and can customize the splicing parameters, which makes obtaining sub-images more varied and engaging; and because obtaining a sub-image requires only a simple gesture, the operations needed to obtain a spliced image are also simplified.

Description

Image processing method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method and terminal equipment.
Background
At present, screen capture is used by more and more people, and its frequency of use keeps rising. Through a screen capture operation, a user can quickly share the screen content of a terminal device; especially when copy-and-paste is unavailable, capturing the screen is both a quick and a convenient way to obtain screen content.
In the prior art, a screen capture can only capture part of the screen or the whole screen, or splice several captured screens into one long screenshot. If a user wants to splice multiple captured images into one image in an arbitrary layout, the user must first acquire the multiple captured images and then splice them in image processing software.
Disclosure of Invention
The embodiments of the present invention provide an image processing method and a terminal device, so as to solve the problem that the existing way of splicing captured screen images is cumbersome to operate and inconvenient to use.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including:
receiving N sliding inputs of a user;
in response to the N sliding inputs, generating M sub-images, wherein the image contents of the M sub-images are associated with the sliding tracks of the N sliding inputs;
receiving first input of a user to the M sub-images;
responding to the first input, splicing the M sub-images according to the splicing parameters selected by the first input, and outputting a target spliced image;
wherein M, N are all positive integers.
In a second aspect, an embodiment of the present invention further provides a terminal device, including:
the first receiving module is used for receiving N sliding inputs of a user;
a first response module, configured to generate M sub-images in response to the N sliding inputs, where image contents of the M sub-images are associated with sliding tracks of the N sliding inputs;
the second receiving module is used for receiving first input of the user to the M sub-images;
the output module is used for responding to the first input, splicing the M sub-images according to the splicing parameters selected by the first input and outputting a target spliced image;
wherein M, N are all positive integers.
In a third aspect, an embodiment of the present invention further provides a terminal device, which includes a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing method.
In the embodiment of the invention, N sliding inputs from a user are received; in response to the N sliding inputs, M sub-images are generated, wherein the image contents of the M sub-images are associated with the sliding tracks of the N sliding inputs; a first input from the user on the M sub-images is received; and, in response to the first input, the M sub-images are spliced according to the splicing parameters selected by the first input and a target spliced image is output, wherein M and N are both positive integers. In this way, the user can capture content displayed on the display screen along a sliding track formed by a sliding input to obtain sub-images, and can customize the splicing parameters, which makes obtaining sub-images more varied and engaging; and because obtaining a sub-image requires only a simple gesture, the operations needed to obtain a spliced image are also simplified.
Drawings
FIG. 1 is a flow chart of an image processing method provided by an embodiment of the invention;
FIG. 2 is a second flowchart of an image processing method according to an embodiment of the present invention;
FIGS. 1-1 to 1-26 are display diagrams of the display screen of a terminal device when the image processing method is implemented;
FIG. 3 is a structural diagram of a terminal device according to an embodiment of the present invention;
fig. 4 is a schematic diagram of the hardware structure of a terminal device according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the image processing method is applied to a terminal device and includes the following steps:
step 101, receiving N sliding inputs of a user, wherein N is a positive integer.
A sliding input is an input formed by the user sliding a finger on the display screen of the terminal device.
When a user wants to perform a custom capture of the content displayed on the display screen of the terminal device, the terminal device is first set to a custom-splicing screen capture mode. This mode can be started through a physical key or a virtual key; for example, on the display screen shown in fig. 1-1, the user starts the custom-splicing screen capture mode by clicking the virtual key indicated by reference numeral 1. A terminal device that has entered the custom-splicing screen capture mode can receive N sliding inputs from the user, and in this mode the user can capture the content displayed on the display screen, including text, images, the desktop, or an application interface. As shown in fig. 1-2, the sliding track formed by the user's sliding input on the display screen is heart-shaped.
Alternatively, as shown on the display screen of fig. 1-3, the user starts a photographing screen capture mode by clicking the virtual key indicated by reference numeral 2. A terminal device that has entered the photographing screen capture mode receives N sliding inputs from the user, and in this mode the user can capture an image displayed on the display screen or an image obtained by photographing. As shown in fig. 1-4, the user acquires an image through a shooting operation on the display screen, where reference numeral 11 is the picture acquired by the shooting operation. As shown in fig. 1-5, the sliding track formed by the user's sliding input on the image is heart-shaped, where reference numeral 12 is the sliding track and reference numeral 13 is a thumbnail of the sub-image obtained by capturing along the sliding track.
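The patent text contains no code, but as a rough illustration of how one sliding input might be collected, the following minimal Android/Kotlin sketch accumulates the track of a single gesture from touch events. The view class, the callback name, and the overall structure are assumptions for illustration, not part of the disclosure.

```kotlin
import android.content.Context
import android.graphics.PointF
import android.view.MotionEvent
import android.view.View

// Hypothetical view that records one sliding input as a list of points.
class SlideCaptureView(context: Context) : View(context) {
    private val track = mutableListOf<PointF>()           // points of the current gesture
    var onSlideFinished: ((List<PointF>) -> Unit)? = null // fired when the finger lifts

    override fun onTouchEvent(event: MotionEvent): Boolean {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> { track.clear(); track.add(PointF(event.x, event.y)) }
            MotionEvent.ACTION_MOVE -> track.add(PointF(event.x, event.y))
            MotionEvent.ACTION_UP   -> onSlideFinished?.invoke(track.toList())
        }
        invalidate() // redraw so the track stays visible while sliding
        return true
    }
}
```

Receiving N sliding inputs then amounts to this callback firing N times, each delivering one sliding track.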
Step 102, in response to the N sliding inputs, generating M sub-images, wherein the image contents of the M sub-images are associated with the sliding tracks of the N sliding inputs, and M is a positive integer.
A sub-image is the image of the area of the target image enclosed by the closed pattern formed by the sliding track of each sliding input. The sliding track may have any shape, for example a quadrangle, a circle, or an irregular polygon. When the terminal device responds to a sliding input, it captures the content displayed on the display screen along the sliding track of that input to obtain a sub-image. As shown in fig. 1-6, the terminal device captures the content displayed on the display screen along a sliding track 13 formed by a sliding input and obtains a sub-image, and a thumbnail 14 of the sub-image is displayed at the bottom of the display screen. As shown in fig. 1-7, the terminal device receives another sliding input from the user, and the sliding track 16 formed by this input is star-shaped. As shown in fig. 1-8, the terminal device captures the content displayed on the display screen along the sliding track 16; the sliding track displayed during capture may be a dotted line, as shown by reference numeral 17. A sub-image is obtained, and its thumbnail 18 is displayed at the bottom of the display screen.
As shown in fig. 1-9, the terminal device again captures the content displayed on the display screen along a sliding track 19 formed by a sliding input and obtains a sub-image, whose thumbnail 20 is displayed at the bottom of the display screen. As shown in fig. 1-10, the terminal device captures the content displayed on the display screen along a sliding track 21 formed by a sliding input and obtains a sub-image, whose thumbnail 22 is displayed at the bottom of the display screen.
If the sliding track is a closed pattern, the content displayed on the display screen is captured according to that closed pattern. If the sliding track is not a closed pattern but the distance between its start position and its end position does not exceed a preset distance threshold, the start and end positions may be regarded as coincident. The terminal device may then connect the start position and the end position by interpolation, move the start position of the sliding track to the end position, or move the end position to the start position; the specific implementation can be chosen flexibly and is not limited here. In any of these ways the sliding track becomes a closed pattern, and the terminal device captures the content on the display screen according to the resulting closed pattern. What is captured is the content on the display screen enclosed by the closed pattern.
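As an illustration of this closure rule only, the sketch below closes a track whose start-to-end gap is within a preset threshold by connecting the end point back to the start point, and rejects a track whose gap is larger; the 40-pixel threshold and all names are assumptions.

```kotlin
import android.graphics.PointF
import kotlin.math.hypot

const val CLOSE_THRESHOLD_PX = 40f // assumed preset distance threshold

/** Returns a closed track, or null if the gap is too large to treat as closed. */
fun closeTrack(track: List<PointF>): List<PointF>? {
    if (track.size < 3) return null // too short to enclose an area
    val start = track.first()
    val end = track.last()
    val gap = hypot(end.x - start.x, end.y - start.y)
    return when {
        gap == 0f                 -> track          // already a closed pattern
        gap <= CLOSE_THRESHOLD_PX -> track + start  // connect end back to start
        else                      -> null           // not a closed pattern
    }
}
```

The alternatives mentioned above (moving the start position to the end position, or vice versa) would simply replace the `track + start` branch.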
The content to be captured is displayed on the display screen and may include text, images, the desktop, or an application interface. The user then performs a sliding input on the display screen so that the content to be captured is enclosed by the closed pattern formed by the sliding track, thereby generating a sub-image. The sub-image is the image obtained by cutting the content displayed on the display screen along the sliding track.
The sliding track formed by the sliding input can be displayed on the display screen, for example drawn with a red or black line, so that the user can see it. After the sub-image is generated, the sliding track can remain displayed, and the user can drag it to another position on the display screen to obtain another sub-image. This saves the user from sliding again to form the same track and simplifies the process of obtaining sub-images. At the same time, because the user captures the displayed content simply by sliding on the display screen, screen capture becomes more varied and engaging.
Step 103, receiving a first input of the user to the M sub-images.
After the M sub-images are acquired, the terminal device receives a first input of the user to the M sub-images, where the first input is used to set one or more of the splicing order, splicing position, or splicing shape of each sub-image. The first input may be a click, a slide, a long press, a hard press, or the like, which is not limited here.
Step 104, in response to the first input, splicing the M sub-images according to the splicing parameters selected by the first input, and outputting a target spliced image.
The splicing parameters include at least one of a splicing order, a splicing position, or a splicing shape. The splicing parameters are selected through the first input, and the terminal device splices the M sub-images according to these parameters and outputs the target spliced image.
In the embodiments of the present invention, the terminal device may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
With the image processing method of the embodiment of the invention, N sliding inputs from a user are received; in response to the N sliding inputs, M sub-images are generated, wherein the image contents of the M sub-images are associated with the sliding tracks of the N sliding inputs; a first input from the user on the M sub-images is received; and, in response to the first input, the M sub-images are spliced according to the splicing parameters selected by the first input and a target spliced image is output, wherein M and N are both positive integers. In this way, the user can capture content displayed on the display screen along a sliding track formed by a sliding input to obtain sub-images, and can customize the splicing parameters, which makes obtaining sub-images more varied and engaging; and because obtaining a sub-image requires only a simple gesture, the operations needed to obtain a spliced image are also simplified.
Optionally, as shown in fig. 2, step 102, generating M sub-images in response to the N sliding inputs, includes:
step 1021, determining a closed pattern formed by a sliding track of the ith sliding input for the ith sliding input; wherein i is a positive integer, and i is not more than N.
The sliding track of the sliding input may be any shape of sliding track, for example, a quadrangle, a circle, an irregular polygon, and so on.
If the sliding track is a closed pattern, the content displayed on the display screen is captured according to that closed pattern. If the sliding track is not a closed pattern but the distance between its start position and end position does not exceed the preset distance threshold, the start and end positions may be regarded as coincident, and the terminal device may connect them by interpolation, move the start position to the end position, or move the end position to the start position; the specific implementation can be chosen flexibly and is not limited here.
Step 1022, extracting, from the target image, the image of the area enclosed by the closed pattern formed by the sliding track of the ith sliding input, and generating the ith sub-image.
The target image includes text, images, the desktop, or an application interface displayed on the display screen. For example, when the desktop is displayed on the display screen, the area of the desktop enclosed by the closed pattern formed by the sliding track of the ith sliding input is extracted, and the ith sub-image is generated from the extracted area.
With this image processing method, for the ith sliding input, the closed pattern formed by the sliding track of the ith sliding input is determined, and the image of the area of the target image enclosed by that closed pattern is extracted to generate the ith sub-image, where i is a positive integer and i ≤ N. The user can thus extract content displayed on the display screen along a sliding track formed by a sliding input to obtain a sub-image, and can customize the splicing parameters, which makes obtaining sub-images more varied and engaging.
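One plausible Android realization of step 1022 is an alpha-mask crop: draw the closed track as a Path, keep only the target-image pixels inside it, and trim the result to the pattern's bounding box. This is a hypothetical sketch under those assumptions, not the patent's prescribed implementation, and it assumes the pattern encloses a non-empty area.

```kotlin
import android.graphics.*

fun extractSubImage(target: Bitmap, closedTrack: List<PointF>): Bitmap {
    // Build a Path from the closed sliding track.
    val path = Path().apply {
        moveTo(closedTrack[0].x, closedTrack[0].y)
        closedTrack.drop(1).forEach { lineTo(it.x, it.y) }
        close()
    }
    // Crop to the bounding box of the pattern so the sub-image is tight.
    val bounds = RectF().also { path.computeBounds(it, true) }
    val out = Bitmap.createBitmap(
        bounds.width().toInt(), bounds.height().toInt(), Bitmap.Config.ARGB_8888)
    val canvas = Canvas(out)
    canvas.translate(-bounds.left, -bounds.top)
    // Draw the mask first, then keep only the target pixels inside it (SRC_IN).
    val paint = Paint(Paint.ANTI_ALIAS_FLAG)
    canvas.drawPath(path, paint)
    paint.xfermode = PorterDuffXfermode(PorterDuff.Mode.SRC_IN)
    canvas.drawBitmap(target, 0f, 0f, paint)
    return out
}
```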
Optionally, after step 1021 and before step 1022, the method further includes: determining an image currently displayed on a display screen as a target image; or controlling the camera to shoot an image, and determining the image shot by the camera as the target image.
Determining the image currently displayed on the display screen as the target image can be understood as follows: the text, desktop, or application interface currently displayed on the display screen is the target image; or an image is opened and displayed on the display screen, and that opened image is the target image; or the camera is opened and a preview picture is displayed on the display screen, and that preview picture is the target image. Besides these ways of determining the target image, the camera may be controlled to shoot an image, and the shot image may be determined as the target image. As shown in fig. 1-17, the target image is a preview picture; when the user presses the shooting key, the terminal device captures the preview picture along a sliding track 23 of a sliding input. The resulting display screen is shown in fig. 1-18: it shows the captured sub-image 24, and a thumbnail 25 of the sub-image is displayed at the bottom of the display screen.
These ways of determining the target image are simple and convenient; they make it easy to extract the various contents displayed on the display screen and meet a variety of user needs.
Optionally, in step 101, receiving the N sliding inputs of the user includes: receiving a kth sliding input of the user, where the kth sliding input is used to divide a first closed pattern into two closed patterns; the first closed pattern is formed by the sliding track of any one of the 1st to (k-1)th sliding inputs.
In this step, the sliding track also covers the case of a curve, i.e. the case where the sliding track is not a closed pattern and the distance between its start position and end position exceeds the preset distance threshold.
The first closed pattern is a closed pattern formed before the kth sliding input, i.e. a closed pattern formed by the sliding track of any one of the 1st to (k-1)th sliding inputs, where k ≤ N. In this step, the kth sliding input by the user can divide the first closed pattern into two closed patterns.
Step 102, generating M sub-images in response to the N sliding inputs, including:
determining a second closed pattern and a third closed pattern based on the first closed pattern and the sliding track of the kth sliding; extracting an image of an area surrounded by the second closed pattern in the target image to generate a first sub-image; and extracting an image of an area surrounded by the third closed pattern in the target image to generate a second sub-image.
The second closed pattern and the third closed pattern are formed from the first closed pattern and the sliding track of the kth sliding input. For example, if the first closed pattern is heart-shaped and the sliding track formed by the kth sliding input is a line segment that divides the heart shape in two, then the outer contour of the heart shape and the sliding track together form two closed patterns: the second closed pattern and the third closed pattern. The specific process can be seen in fig. 1-20 to fig. 1-23, which show one sub-image being cut into two sub-images.
After the second closed pattern and the third closed pattern are determined, the image of the area of the target image enclosed by the second closed pattern is extracted to generate the first sub-image, and the image of the area of the target image enclosed by the third closed pattern is extracted to generate the second sub-image.
Referring to fig. 1-19, fig. 1-19 shows the user selecting custom splicing. As shown in fig. 1-20, when the user selects the thumbnail 25 of a sub-image displayed on the display screen, the sub-image 24 is displayed on the screen. As shown in fig. 1-21 to fig. 1-22, the user cuts the sub-image 24 again along the two sliding tracks indicated by reference numerals 241 and 242 to obtain two new sub-images, whose thumbnails, indicated by reference numerals 26 and 27, are displayed at the lower end of the display screen. After the capture is complete, the default background is displayed on the display screen, as shown in fig. 1-23.
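For illustration only, the sketch below divides a closed pattern, represented as a vertex ring, with a dividing stroke approximated by the straight segment cutA-cutB. It assumes the segment crosses the boundary exactly twice, which covers the heart-and-line example above but not every curve the patent allows; all names are hypothetical.

```kotlin
import android.graphics.PointF

private fun cross(o: PointF, a: PointF, b: PointF) =
    (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x)

/** True if segments p1p2 and p3p4 properly intersect. */
private fun intersects(p1: PointF, p2: PointF, p3: PointF, p4: PointF) =
    cross(p1, p2, p3) * cross(p1, p2, p4) < 0 &&
    cross(p3, p4, p1) * cross(p3, p4, p2) < 0

/** Point where segments p1p2 and p3p4 cross (call only after intersects()). */
private fun crossingPoint(p1: PointF, p2: PointF, p3: PointF, p4: PointF): PointF {
    val d1x = p2.x - p1.x; val d1y = p2.y - p1.y
    val d2x = p4.x - p3.x; val d2y = p4.y - p3.y
    val t = ((p3.x - p1.x) * d2y - (p3.y - p1.y) * d2x) / (d1x * d2y - d1y * d2x)
    return PointF(p1.x + t * d1x, p1.y + t * d1y)
}

/** Splits a closed vertex ring into the second and third closed patterns. */
fun splitClosedPattern(poly: List<PointF>, cutA: PointF, cutB: PointF)
        : Pair<List<PointF>, List<PointF>>? {
    val hit = poly.indices.filter { i ->
        intersects(poly[i], poly[(i + 1) % poly.size], cutA, cutB)
    }
    if (hit.size != 2) return null // the cut does not split the pattern in two
    val (i, j) = hit[0] to hit[1]
    val a = crossingPoint(poly[i], poly[(i + 1) % poly.size], cutA, cutB)
    val b = crossingPoint(poly[j], poly[(j + 1) % poly.size], cutA, cutB)
    // One pattern: ring vertices between the crossings, closed along the cut.
    val second = listOf(a) + poly.subList(i + 1, j + 1) + listOf(b)
    // The other pattern: the remaining ring vertices, closed along the cut.
    val third = listOf(b) + poly.subList(j + 1, poly.size) +
                poly.subList(0, i + 1) + listOf(a)
    return second to third
}
```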
In this embodiment, the first closed pattern can be divided to obtain two closed patterns. This way of obtaining closed patterns is convenient and fast, lets the user customize the splicing parameters, and makes obtaining sub-images more varied and engaging.
Optionally, after determining the second closed pattern and the third closed pattern based on the first closed pattern and the sliding trajectory of the kth sliding, the method further includes:
receiving a second input of the user; acquiring an image of an area surrounded by the second closed pattern in the first target image in response to the second input;
receiving a third input of the user; acquiring an image of an area in a second target image enclosed by the third enclosing pattern in response to the third input;
generating a third sub-image based on an image of a region in the first target image surrounded by the second closed pattern and an image of a region in the second target image surrounded by the third closed pattern.
Specifically, the target image includes a first target image and a second target image. The second input and the third input may be a click operation, a long-press operation, a hard-press operation, or the like. When the terminal device receives a second input from the user, it extracts the content of the first target image according to the second closed pattern, thereby obtaining the image of the area of the first target image enclosed by the second closed pattern.
Similarly, when the terminal device receives a third input from the user, it extracts the content of the second target image according to the third closed pattern, thereby obtaining the image of the area of the second target image enclosed by the third closed pattern.
A third sub-image is generated based on the image of the area of the first target image enclosed by the second closed pattern and the image of the area of the second target image enclosed by the third closed pattern; for example, the third sub-image may be a spliced image of these two extracted images.
In this embodiment, the second closed pattern is used to extract from the first target image and the third closed pattern is used to extract from the second target image, yielding multiple sub-images that are finally spliced into the third sub-image. The third sub-image can therefore contain sub-images from different target images, which makes the ways of obtaining sub-images more diverse and more engaging.
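Reusing the hypothetical extractSubImage sketch from step 1022 above, the third sub-image could be assembled roughly as follows. The side-by-side splice is only one possible combination, and all names are illustrative.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.PointF

fun thirdSubImage(first: Bitmap, secondPattern: List<PointF>,
                  second: Bitmap, thirdPattern: List<PointF>): Bitmap {
    val a = extractSubImage(first, secondPattern) // area from the first target image
    val b = extractSubImage(second, thirdPattern) // area from the second target image
    // Splice the two extracted regions side by side into one sub-image.
    val out = Bitmap.createBitmap(a.width + b.width, maxOf(a.height, b.height),
                                  Bitmap.Config.ARGB_8888)
    val canvas = Canvas(out)
    canvas.drawBitmap(a, 0f, 0f, null)
    canvas.drawBitmap(b, a.width.toFloat(), 0f, null)
    return out
}
```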
Optionally, after step 101 of receiving the N sliding inputs of the user, the method further includes: displaying a thumbnail of each of the M sub-images.
The acquired M sub-images can be displayed as thumbnails at a designated position on the display screen for the user to view; the designated position includes at least one of the top, bottom, left side, or right side of the display screen. The user can also delete a sub-image by deleting its thumbnail.
After the N sliding inputs of the user are received, if it is detected that the user long-presses an image in the thumbnails, the terminal device pops up a delete option, as shown in fig. 1-11, and the user can delete the sub-image corresponding to that thumbnail by selecting the delete option.
Optionally, the first input includes a first sub-input and a second sub-input. Step 104, in response to the first input, stitching the M sub-images according to the stitching parameters selected by the first input, and outputting a target stitched image, where the step includes:
determining a target splicing order of the M sub-images in response to the first sub-input; determining a target splicing template in response to the second sub-input; and splicing the M sub-images according to the target splicing template based on the target splicing order, and outputting a target spliced image.
In this step, the first sub-input may be an operation such as a click, a slide, a long press, or a hard press, which is not limited here. The target splicing order of the M sub-images is determined through the first sub-input. For example, the user clicks the M sub-images, and the target splicing order of each sub-image is determined by the order of the clicks; in this case the first sub-input is a click operation. Furthermore, when the thumbnails corresponding to the sub-images are displayed on the display screen, the target splicing order can be determined by clicking those thumbnails. This way of determining the target splicing order is simple and intuitive.
The second sub-input may be an operation combining clicking and selecting. For example, the user clicks the display screen, a selection list containing several splicing templates pops up, and the user selects one of them; the splicing template selected by the user is the target splicing template.
A splicing template defines sub-image positions. For example, a nine-square-grid splicing template defines a positional relationship of three rows and three columns, and the M sub-images are arranged in those positions in splicing order: the images with splicing order 1, 2 and 3 fill the first row from left to right; the images with splicing order 4, 5 and 6 fill the second row from left to right; and the images with splicing order 7, 8 and 9 fill the third row from left to right.
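The row-and-column arithmetic of the nine-square-grid template can be summarized in a short sketch; the cell size is an assumed parameter, and the function name is hypothetical.

```kotlin
import android.graphics.Rect

/** Cell rectangle for the sub-image with 0-based splicing order `order`. */
fun nineGridCell(order: Int, cellW: Int, cellH: Int): Rect {
    val row = order / 3 // orders 0,1,2 fill the first row, left to right
    val col = order % 3 // orders 3,4,5 the second row; 6,7,8 the third
    return Rect(col * cellW, row * cellH, (col + 1) * cellW, (row + 1) * cellH)
}
```

Splicing then amounts to drawing each sub-image into its cell; the same index arithmetic degenerates to one row for horizontal splicing and one column for vertical splicing.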
The splicing template can also be a horizontal or a vertical splicing template. Horizontal splicing arranges the sub-images horizontally, and vertical splicing arranges them vertically. Of course, other templates for arranging the sub-images may also be used, which is not limited here. As shown in fig. 1-12, the available splicing templates include horizontal splicing, vertical splicing, nine-square-grid splicing, and custom splicing. FIGS. 1-13 show the display effect of horizontal splicing; FIGS. 1-14 show vertical splicing; FIGS. 1-15 show nine-square-grid splicing; and FIGS. 1-16 show custom splicing.
After the target splicing template and the target splicing order are determined, the M sub-images are spliced and the target spliced image is output. The splicing arrangement of each of the M sub-images can thus be determined quickly, which simplifies the user's operations in obtaining the target spliced image.
Optionally, the splicing template can also be a custom template. With custom splicing, the user can freely set the template as needed.
For example, with custom splicing, the first sub-input may also be a drag operation. The terminal device displays the thumbnails of the M sub-images at a designated position, which includes at least one of the top, bottom, left side, or right side of the display screen. The thumbnail corresponding to the first sub-image is the object of the drag operation. When the user drags that thumbnail from the designated position to another position on the display screen, the end position of the drag can be regarded as the position where the user wants to place the first sub-image, and the first sub-image is displayed there. As shown in fig. 1-24, the displayed position of the star-shaped sub-image 30 is the position to which the user moved it by dragging, where reference numeral 29 is the movement track of the drag operation and reference numeral 28 is the thumbnail of the sub-image 30. With custom splicing, the user may also move sub-images so that one sub-image is placed on top of another, i.e. two sub-images may be allowed to overlap. As shown in fig. 1-25, sub-image 24 is stacked on sub-image 30, and sub-images 34 and 35 are stacked on sub-image 33, where thumbnail 25 corresponds to sub-image 24, thumbnail 32 to sub-image 30, thumbnail 31 to sub-image 33, thumbnail 26 to sub-image 35, and thumbnail 27 to sub-image 34. The user drags each thumbnail to move the corresponding sub-image to the desired position, then clicks the thumbnails, and the mobile terminal splices the sub-images.
Optionally, the user may drag the displayed first sub-image to drag the first sub-image to another position of the display screen.
Finally, by dragging the thumbnails of the M sub-images, the user can display each sub-image at the end position of its drag operation, and the sub-images are then spliced according to their display positions. This splicing method is simple and intuitive.
Optionally, the first input comprises a third sub-input; step 104, in response to the first input, stitching the M sub-images according to the stitching parameters selected by the first input, and outputting a target stitched image, further including:
updating display of at least one target sub-image of the M sub-images in response to the third sub-input; and splicing the M sub-images based on the updated at least one target sub-image, and outputting a target spliced image.
Specifically, the third sub-input is a zoom-in operation, a zoom-out operation, a move operation, or the like for at least one target sub-image among the M sub-images.
Optionally, the updating the display of at least one target sub-image in the M sub-images includes at least one of:
moving at least one target sub-image of the M sub-images; magnifying at least one target sub-image of the M sub-images; reducing at least one target sub-image in the M sub-images; and adjusting the display angle of at least one target sub-image in the M sub-images.
Specifically, when the third sub-input is a zoom-in operation on at least one target sub-image of M sub-images, the display of at least one target sub-image of the M sub-images is updated as follows: magnifying at least one target sub-image of the M sub-images.
When the third sub-input is a zoom-out operation on at least one target sub-image in the M sub-images, updating the display of at least one target sub-image in the M sub-images to: and reducing at least one target sub-image in the M sub-images.
When the third sub-input is a move operation on at least one target sub-image in the M sub-images, updating the display of at least one target sub-image in the M sub-images to: and moving at least one target sub-image in the M sub-images, or adjusting the display angle of at least one target sub-image in the M sub-images.
Through the third sub-input, the size, placement position, or placement angle of at least one target sub-image among the M sub-images can be adjusted. In response to the third sub-input, the terminal device updates the size, placement position, or placement angle of that target sub-image, so that its change is displayed dynamically on the terminal device.
Through these operations, instead of selecting one of the existing splicing templates in the mobile terminal (such as nine-square-grid, horizontal, or vertical splicing), the user can change the position, size, or angle of a target sub-image through at least one of moving, enlarging, reducing, or adjusting, so the position, size, and angle of any target sub-image can be customized and splicing becomes more flexible. For example, when the user enlarges a target sub-image by sliding two fingers apart on it, the terminal device updates the display of that sub-image, i.e. displays the enlarged sub-image. Based on the updated target sub-image, the M sub-images are spliced again and the target spliced image is output.
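A minimal sketch of this display update, under the assumption that each target sub-image carries its own position, scale, and angle which the third sub-input mutates before the splice is redrawn; the class and field names are illustrative.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Matrix

// Hypothetical holder for one placed target sub-image and its display state.
class PlacedSubImage(val bitmap: Bitmap) {
    var x = 0f; var y = 0f // placement position (changed by a move operation)
    var scale = 1f         // changed by zoom-in / zoom-out operations
    var angleDeg = 0f      // display angle (adjusted by the user)

    fun draw(canvas: Canvas) {
        val m = Matrix().apply {
            postScale(scale, scale)
            postRotate(angleDeg, bitmap.width * scale / 2, bitmap.height * scale / 2)
            postTranslate(x, y)
        }
        canvas.drawBitmap(bitmap, m, null)
    }
}
```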
In this embodiment, when the terminal device receives the third sub-input on at least one target sub-image among the M sub-images, it updates the display of that target sub-image so that its change is shown dynamically, and at the same time splices the M sub-images based on the updated target sub-image and outputs the target spliced image. The terminal device can thus display the target spliced image dynamically according to the user's third input.
Optionally, in step 104, the splicing the M sub-images according to the splicing parameter selected by the first input, and outputting a target spliced image, includes:
acquiring a target background image; and responding to the first input, splicing the M sub-images and the target background image according to the splicing parameters selected by the first input, and outputting a target spliced image taking the target background image as the background.
The target background image can be obtained by shooting through a camera or by selecting from an album, which is not limited herein, as shown in fig. 1-26.
When the M sub-images are spliced with the target background image, they are spliced according to the splicing parameters with the target background image as the background.
In this embodiment, a target background image is acquired, and the M sub-images are spliced onto the target background image according to the splicing parameters selected by the first input. The resulting target spliced image takes the target background image as its background, so more varied target spliced images can be provided; that is, the user can customize the background of the spliced image to meet more of their needs.
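Building on the hypothetical PlacedSubImage sketch above, splicing onto a target background image could look roughly like this; list order stands in for stacking order, which is one possible convention.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

fun spliceOnBackground(background: Bitmap, subImages: List<PlacedSubImage>): Bitmap {
    // Copy the background so the original image stays untouched.
    val out = background.copy(Bitmap.Config.ARGB_8888, /* isMutable = */ true)
    val canvas = Canvas(out)
    // Later entries draw on top, so list order doubles as stacking order.
    subImages.forEach { it.draw(canvas) }
    return out
}
```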
Referring to fig. 3, fig. 3 is a structural diagram of a terminal device according to an embodiment of the present invention, and as shown in fig. 3, a terminal device 400 according to an embodiment of the present invention includes:
a first receiving module 401, configured to receive N sliding inputs of a user;
a first response module 402, configured to generate M sub-images in response to the N sliding inputs, where image contents of the M sub-images are associated with sliding tracks of the N sliding inputs;
a second receiving module 403, configured to receive a first input of the M sub-images from a user;
an output module 404, configured to respond to the first input, splice the M sub-images according to the splicing parameter selected by the first input, and output a target spliced image;
wherein M, N are all positive integers.
Optionally, the first response module 402 includes:
the closed pattern acquisition submodule is used for determining a closed pattern formed by a sliding track of the ith sliding input for the ith sliding input;
the sub-image generation sub-module is used for extracting an image of an area surrounded by a closed pattern formed by a sliding track input for the ith sliding in the target image and generating an ith sub-image; wherein i is a positive integer, and i is not more than N.
Optionally, the terminal device further includes a target image determining module, where the target image determining module is configured to: determining an image currently displayed on a display screen as a target image; or controlling the camera to shoot an image, and determining the image shot by the camera as the target image.
Optionally, the first receiving module 401 is configured to receive a kth sliding input of the user, where the kth sliding input is used to divide the first closed pattern into two closed patterns; the first closed pattern is a closed pattern formed by a sliding track of any sliding input from the 1 st sliding input to the k-1 st sliding input;
a first response module 402, configured to determine a second closed pattern and a third closed pattern based on the first closed pattern and the sliding track of the kth sliding input; extract the image of the area of the target image enclosed by the second closed pattern to generate a first sub-image; and extract the image of the area of the target image enclosed by the third closed pattern to generate a second sub-image.
Optionally, the terminal device further includes a third sub-image obtaining module, configured to receive a second input from the user; acquiring an image of an area surrounded by the second closed pattern in the first target image in response to the second input; receiving a third input of the user; acquiring an image of an area in a second target image enclosed by the third enclosing pattern in response to the third input; generating a third sub-image based on an image of a region in the first target image surrounded by the second closed pattern and an image of a region in the second target image surrounded by the third closed pattern.
Optionally, the terminal device 400 further includes a thumbnail display module, configured to display a thumbnail of each of the M sub-images.
Optionally, the first input includes a first sub-input and a second sub-input;
the output module is configured to determine a target splicing order of the M sub-images in response to the first sub-input; determine a target splicing template in response to the second sub-input; and splice the M sub-images according to the target splicing template based on the target splicing order, and output a target spliced image.
Optionally, the first input comprises a third sub-input;
an output module further for updating a display of at least one target sub-image of the M sub-images in response to the third sub-input; and splicing the M sub-images based on the updated at least one target sub-image, and outputting a target spliced image.
Optionally, the updating the display of at least one target sub-image in the M sub-images includes at least one of:
moving at least one target sub-image of the M sub-images;
magnifying at least one target sub-image of the M sub-images;
reducing at least one target sub-image in the M sub-images;
and adjusting the display angle of at least one target sub-image in the M sub-images.
Optionally, the output module 404 is configured to obtain a target background image; and responding to the first input, splicing the M sub-images and the target background image according to the splicing parameters selected by the first input, and outputting a target spliced image taking the target background image as the background.
The terminal device 400 can implement each process implemented by the terminal device in the method embodiments of fig. 1 to fig. 2 and can achieve the same or similar beneficial effects. To avoid repetition, details are not repeated here.
The terminal device 400 of the embodiment of the present invention receives N sliding inputs from the user; in response to the N sliding inputs, generates M sub-images, wherein the image contents of the M sub-images are associated with the sliding tracks of the N sliding inputs; receives a first input from the user on the M sub-images; and, in response to the first input, splices the M sub-images according to the splicing parameters selected by the first input and outputs a target spliced image, wherein M and N are both positive integers. In this way, the user can capture content displayed on the display screen along a sliding track formed by a sliding input to obtain sub-images, and can customize the splicing parameters, which makes obtaining sub-images more varied and engaging; and because obtaining a sub-image requires only a simple gesture, the operations needed to obtain a spliced image are also simplified.
Fig. 4 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, as shown in fig. 4, the terminal device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 4 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 710 is configured to control the user input unit 707 to receive N sliding inputs of the user;
in response to the N sliding inputs, generating M sub-images, wherein the image contents of the M sub-images are associated with the sliding tracks of the N sliding inputs;
controlling the user input unit 707 to receive a first input of the M sub-images by the user;
responding to the first input, splicing the M sub-images according to the splicing parameters selected by the first input, and outputting a target spliced image; wherein M, N are all positive integers.
Optionally, the processor 710 is further configured to determine, for the ith sliding input, a closed pattern formed by the sliding track of the ith sliding input;
extracting an image of an area surrounded by a closed pattern formed by a sliding track input for the ith sliding in the target image to generate an ith sub-image;
wherein i is a positive integer, and i is not more than N.
Optionally, the processor 710 is further configured to determine an image currently displayed on the display screen as a target image;
or controlling the camera to shoot an image, and determining the image shot by the camera as the target image.
Optionally, the processor 710 is further configured to control the user input unit 707 to receive a kth sliding input of the user, where the kth sliding input is used to divide the first closed pattern into two closed patterns;
the first closed pattern is a closed pattern formed by a sliding track of any sliding input from the 1 st sliding input to the k-1 st sliding input;
generating M sub-images in response to the N sliding inputs, including:
determining a second closed pattern and a third closed pattern based on the first closed pattern and the sliding track of the kth sliding;
extracting an image of an area surrounded by the second closed pattern in the target image to generate a first sub-image;
and extracting an image of an area surrounded by the third closed pattern in the target image to generate a second sub-image.
Optionally, the processor 710 is further configured to control the user input unit 707 to receive a second input of the user;
acquiring an image of an area surrounded by the second closed pattern in the first target image in response to the second input;
receiving a third input of the user;
acquiring an image of an area in a second target image enclosed by the third enclosing pattern in response to the third input;
generating a third sub-image based on an image of a region in the first target image surrounded by the second closed pattern and an image of a region in the second target image surrounded by the third closed pattern.
Optionally, the processor 710 is further configured to:
the display unit 706 is controlled to display a thumbnail of each of the M sub-images.
Optionally, the first input includes a first sub-input and a second sub-input;
processor 710, further configured to:
determining a target splicing order of the M sub-images in response to the first sub-input;
determining a target splicing template in response to the second sub-input;
and splicing the M sub-images according to the target splicing template based on the target splicing sequence, and outputting a target splicing image.
Optionally, the first input comprises a third sub-input;
processor 710, further configured to:
updating display of at least one target sub-image of the M sub-images in response to the third sub-input;
and splicing the M sub-images based on the updated at least one target sub-image, and outputting a target spliced image.
Optionally, the updating the display of at least one target sub-image in the M sub-images includes at least one of:
moving at least one target sub-image of the M sub-images;
magnifying at least one target sub-image of the M sub-images;
reducing at least one target sub-image in the M sub-images;
and adjusting the display angle of at least one target sub-image in the M sub-images.
Processor 710, further configured to:
acquiring a target background image;
and responding to the first input, splicing the M sub-images and the target background image according to the splicing parameters selected by the first input, and outputting a target spliced image taking the target background image as the background.
The terminal device 700 can implement each process implemented by the terminal device in the foregoing embodiments, and details are not described here to avoid repetition.
The terminal device 700 of the embodiment of the present invention receives N sliding inputs from the user; in response to the N sliding inputs, generates M sub-images, wherein the image contents of the M sub-images are associated with the sliding tracks of the N sliding inputs; receives a first input from the user on the M sub-images; and, in response to the first input, splices the M sub-images according to the splicing parameters selected by the first input and outputs a target spliced image, wherein M and N are both positive integers. In this way, the user can capture content displayed on the display screen along a sliding track formed by a sliding input to obtain sub-images, and can customize the splicing parameters, which makes obtaining sub-images more varied and engaging; and because obtaining a sub-image requires only a simple gesture, the operations needed to obtain a spliced image are also simplified.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used to receive and send signals during message transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards it to the processor 710 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with the network and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband internet access through the network module 702, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the terminal device 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042; the GPU 7041 processes image data of still images or video obtained by an image capture device (e.g., a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the GPU 7041 may be stored in the memory 709 (or another storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 can receive sound and process it into audio data. In phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 701 and output.
The terminal device 700 further comprises at least one sensor 705, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor adjusts the luminance of the display panel 7061 according to the brightness of ambient light, and the proximity sensor turns off the display panel 7061 and/or the backlight when the terminal device 700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, and magnetometer posture calibration) and for vibration identification related functions (such as a pedometer and tapping); the sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, and receives and executes commands sent by the processor 710. In addition, the touch panel 7071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
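The pipeline above (touch detection, coordinate conversion, delivery to the processor) is what produces the sliding tracks used throughout this document. As an editorial illustration only, a sliding track can be modeled as the stream of touch point coordinates reported by the touch controller, with a simple closure test; the class name, event name, and threshold below are assumptions, not part of the disclosure.

```python
# Illustrative only: accumulate touch-point coordinates into a sliding track and
# test whether the track forms a closed pattern. Names and threshold are assumptions.
from math import dist

CLOSE_THRESHOLD_PX = 20  # treat an end point within 20 px of the start as "closed"

class SlidingTrack:
    def __init__(self) -> None:
        self.points: list[tuple[int, int]] = []

    def on_touch_move(self, x: int, y: int) -> None:
        self.points.append((x, y))      # coordinates reported by the touch controller

    def is_closed(self) -> bool:
        return (len(self.points) >= 3
                and dist(self.points[0], self.points[-1]) <= CLOSE_THRESHOLD_PX)
```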
Further, the touch panel 7071 may be overlaid on the display panel 7061; when the touch panel 7071 detects a touch operation on or near it, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and the processor 710 then provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although in fig. 4 the touch panel 7071 and the display panel 7061 are implemented as two independent components to realize the input and output functions of the terminal device, in some embodiments they may be integrated to realize these functions; this is not limited herein.
The interface unit 708 is an interface for connecting an external device to the terminal device 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal device 700, or may be used to transmit data between the terminal device 700 and the external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the storage data area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. Further, the memory 709 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 710 is the control center of the terminal device; it connects the various parts of the entire terminal device by using various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 709 and calling the data stored in the memory 709, thereby monitoring the terminal device as a whole. The processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may alternatively not be integrated into the processor 710.
The terminal device 700 may further include a power supply 711 (e.g., a battery) for supplying power to various components, and preferably, the power supply 711 may be logically connected to the processor 710 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 700 includes some functional modules that are not shown, which are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal device, including a processor 710, a memory 709, and a computer program stored in the memory 709 and executable on the processor 710. When executed by the processor 710, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the embodiment of the image processing method and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware; in many cases, however, the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. An image processing method, comprising:
receiving N sliding inputs of a user;
in response to the N sliding inputs, generating M sub-images, wherein the image contents of the M sub-images are associated with the sliding tracks of the N sliding inputs;
receiving a first input of a user to the M sub-images;
responding to the first input, stitching the M sub-images according to the stitching parameters selected by the first input, and outputting a target stitched image;
wherein M and N are both positive integers;
the receiving of N sliding inputs of a user comprises:
receiving a kth sliding input of a user, wherein the kth sliding input is used for dividing a first closed pattern into two closed patterns;
the first closed pattern is a closed pattern formed by a sliding track of any sliding input from the 1st sliding input to the (k-1)th sliding input;
the generating M sub-images in response to the N sliding inputs includes:
determining a second closed pattern and a third closed pattern based on the first closed pattern and the sliding track of the kth sliding input;
extracting an image of an area surrounded by the second closed pattern in the target image to generate a first sub-image;
and extracting an image of an area surrounded by the third closed pattern in the target image to generate a second sub-image.
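As an editorial illustration only (not part of the claimed subject matter): one way to realize this division, assuming the first closed pattern and the kth sliding track are available as coordinate lists, uses the Shapely library's split operation together with Pillow. The function name divide_and_extract and the choice of libraries are assumptions; the claim does not prescribe any geometry library.

```python
# Sketch: divide the first closed pattern by the kth sliding track into the second
# and third closed patterns, then extract both regions from the target image.
from PIL import Image, ImageDraw
from shapely.geometry import LineString, Polygon
from shapely.ops import split

def divide_and_extract(target: Image.Image,
                       first_closed: list[tuple[float, float]],
                       kth_track: list[tuple[float, float]]) -> list[Image.Image]:
    pieces = split(Polygon(first_closed), LineString(kth_track))
    sub_images = []
    for piece in pieces.geoms:                      # the second and third closed patterns
        mask = Image.new("L", target.size, 0)
        ImageDraw.Draw(mask).polygon(list(piece.exterior.coords), fill=255)
        sub = Image.new("RGBA", target.size)
        sub.paste(target, (0, 0), mask)             # keep only the enclosed pixels
        sub_images.append(sub.crop(mask.getbbox())) # the first / second sub-image
    return sub_images
```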
2. The method of claim 1, wherein generating M sub-images in response to the N sliding inputs comprises:
for the ith sliding input, determining a closed pattern formed by the sliding track of the ith sliding input;
extracting an image of an area surrounded by the closed pattern formed by the sliding track of the ith sliding input in the target image, to generate an ith sub-image;
wherein i is a positive integer, and i is not more than N.
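As an editorial illustration only, this per-input case reduces to single-region extraction; a sketch that reuses the hypothetical generate_sub_image helper from the earlier illustration:

```python
# Illustrative: the ith sliding input yields the ith closed pattern and the ith sub-image,
# reusing the hypothetical generate_sub_image helper sketched earlier in this document.
def generate_all_sub_images(target, closed_tracks):
    # closed_tracks[i-1] is the closed sliding track of the ith sliding input
    return [generate_sub_image(target, track) for track in closed_tracks]
```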
3. The method according to claim 2, wherein, after determining the closed pattern formed by the sliding track of the ith sliding input and before extracting the image of the area surrounded by that closed pattern in the target image to generate the ith sub-image, the method further comprises:
determining an image currently displayed on a display screen as a target image;
or controlling a camera to shoot an image, and determining the image shot by the camera as the target image.
4. The method of claim 1, wherein, after determining the second closed pattern and the third closed pattern based on the first closed pattern and the sliding track of the kth sliding input, the method further comprises:
receiving a second input of the user;
acquiring an image of an area surrounded by the second closed pattern in the first target image in response to the second input;
receiving a third input of the user;
acquiring an image of an area in a second target image surrounded by the third closed pattern in response to the third input;
and generating a third sub-image based on the image of the area in the first target image surrounded by the second closed pattern and the image of the area in the second target image surrounded by the third closed pattern.
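As an editorial illustration only (not part of the claims), the two regions can be merged into one third sub-image; the side-by-side placement below is one plausible combination rule, since the claim leaves the combination open, and the helper name is carried over from the earlier hypothetical sketches.

```python
# Sketch: build the third sub-image from a region of the first target image and a
# region of the second target image (side-by-side merge is an assumption).
from PIL import Image

def third_sub_image(first_target, second_target, second_closed, third_closed):
    region_a = generate_sub_image(first_target, second_closed)
    region_b = generate_sub_image(second_target, third_closed)
    combined = Image.new("RGBA", (region_a.width + region_b.width,
                                  max(region_a.height, region_b.height)))
    combined.paste(region_a, (0, 0), region_a)
    combined.paste(region_b, (region_a.width, 0), region_b)
    return combined
```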
5. The method of claim 1, wherein, after receiving the N sliding inputs of the user, the method further comprises:
and displaying the thumbnail of each sub-image in the M sub-images.
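As an editorial illustration only, the thumbnail display can be sketched with Pillow's in-place thumbnail(); the 128x128 size is an arbitrary assumption.

```python
# Illustrative: produce a thumbnail for each of the M sub-images.
def make_thumbnails(sub_images, size=(128, 128)):
    thumbs = []
    for im in sub_images:
        t = im.copy()
        t.thumbnail(size)   # shrinks in place, preserving aspect ratio
        thumbs.append(t)
    return thumbs
```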
6. The method of claim 1, wherein the first input comprises a first sub-input and a second sub-input;
the responding to the first input, stitching the M sub-images according to the stitching parameters selected by the first input, and outputting a target stitched image includes:
determining a target stitching order of the M sub-images in response to the first sub-input;
determining a target stitching template in response to the second sub-input;
and stitching the M sub-images according to the target stitching template based on the target stitching order, and outputting a target stitched image.
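As an editorial illustration only (not part of the claims), the order-plus-template combination can be sketched by modeling the target stitching template as a list of slot rectangles; this representation, and the helper name, are assumptions.

```python
# Sketch: fill template slots with sub-images in the target stitching order.
from PIL import Image

def stitch_with_template(sub_images, order, slots, canvas_size):
    # slots: one (x, y, w, h) rectangle per sub-image, in template order
    canvas = Image.new("RGBA", canvas_size, (255, 255, 255, 255))
    for (x, y, w, h), idx in zip(slots, order):
        im = sub_images[idx].resize((w, h))   # fit the sub-image to its slot
        canvas.paste(im, (x, y), im)
    return canvas
```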
7. The method of claim 1, wherein the first input comprises a third sub-input;
the responding to the first input, stitching the M sub-images according to the stitching parameters selected by the first input, and outputting a target stitched image further includes:
updating display of at least one target sub-image of the M sub-images in response to the third sub-input;
and stitching the M sub-images based on the updated at least one target sub-image, and outputting a target stitched image.
8. The method of claim 7, wherein said updating the display of at least one target sub-image of said M sub-images comprises at least one of:
moving at least one target sub-image of the M sub-images;
magnifying at least one target sub-image of the M sub-images;
reducing at least one target sub-image in the M sub-images;
and adjusting the display angle of at least one target sub-image in the M sub-images.
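As an editorial illustration only, these four display updates map naturally onto basic image operations; the parameter names below are illustrative.

```python
# Sketch of the display updates: move (dx, dy), magnify/reduce (scale), and
# display-angle adjustment (rotation) of a target sub-image.
def update_sub_image(im, dx=0, dy=0, scale=1.0, angle=0.0):
    if scale != 1.0:   # magnify (>1) or reduce (<1)
        im = im.resize((max(1, round(im.width * scale)),
                        max(1, round(im.height * scale))))
    if angle:          # adjust the display angle
        im = im.rotate(angle, expand=True)
    return im, (dx, dy)  # the offset is applied when the image is re-pasted
```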
9. The method according to claim 1, wherein said stitching the M sub-images according to the stitching parameters selected by the first input to output a target stitched image comprises:
acquiring a target background image;
and responding to the first input, stitching the M sub-images and the target background image according to the stitching parameters selected by the first input, and outputting a target stitched image taking the target background image as the background.
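As an editorial illustration only (not part of the claims), stitching onto a target background image can be sketched as alpha-compositing each sub-image onto the background canvas; the per-image placement positions are an assumption, since the claim only requires that the background serve as the canvas.

```python
# Sketch: stitch the M sub-images onto the target background image.
def stitch_on_background(background, sub_images, positions):
    out = background.convert("RGBA")           # the background becomes the canvas
    for im, (x, y) in zip(sub_images, positions):
        out.paste(im, (x, y), im)              # alpha-composite each sub-image
    return out
```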
10. A terminal device, comprising:
the first receiving module is used for receiving N sliding inputs of a user;
a first response module, configured to generate M sub-images in response to the N sliding inputs, where image contents of the M sub-images are associated with sliding tracks of the N sliding inputs;
the second receiving module is used for receiving a first input of the user to the M sub-images;
the output module is used for responding to the first input, stitching the M sub-images according to the stitching parameters selected by the first input, and outputting a target stitched image;
wherein M and N are both positive integers;
the first receiving module is specifically configured to receive a kth sliding input of a user, where the kth sliding input is used to divide a first closed pattern into two closed patterns, and the first closed pattern is a closed pattern formed by a sliding track of any sliding input from the 1st sliding input to the (k-1)th sliding input;
the first response module is specifically configured to determine a second closed pattern and a third closed pattern based on the first closed pattern and the sliding track of the kth sliding input; extract an image of an area surrounded by the second closed pattern in the target image to generate a first sub-image; and extract an image of an area surrounded by the third closed pattern in the target image to generate a second sub-image.
11. A terminal device, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 9.
CN201811554757.XA 2018-12-19 2018-12-19 Image processing method and terminal equipment Active CN109683777B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811554757.XA CN109683777B (en) 2018-12-19 2018-12-19 Image processing method and terminal equipment

Publications (2)

Publication Number Publication Date
CN109683777A CN109683777A (en) 2019-04-26
CN109683777B (en) 2020-11-17

Family

ID=66186885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811554757.XA Active CN109683777B (en) 2018-12-19 2018-12-19 Image processing method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109683777B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555816B (en) * 2019-09-09 2021-10-29 珠海金山网络游戏科技有限公司 Picture processing method and device, computing equipment and storage medium
CN111338519B (en) * 2020-02-04 2022-05-06 华为技术有限公司 Display method and electronic equipment
CN111629268B (en) * 2020-05-21 2022-07-22 Oppo广东移动通信有限公司 Multimedia file splicing method and device, electronic equipment and readable storage medium
CN112162805B (en) * 2020-09-23 2023-05-19 维沃移动通信有限公司 Screenshot method and device and electronic equipment
CN112286474B (en) * 2020-10-28 2023-03-14 杭州海康威视数字技术股份有限公司 Image processing method, device and system and display controller
CN112437231B (en) * 2020-11-24 2023-11-14 维沃移动通信(杭州)有限公司 Image shooting method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102263706A (en) * 2010-05-26 2011-11-30 腾讯科技(深圳)有限公司 Image interception method and apparatus thereof
CN102681829A (en) * 2011-03-16 2012-09-19 阿里巴巴集团控股有限公司 Screenshot method, device and communication client
CN102779008A (en) * 2012-06-26 2012-11-14 奇智软件(北京)有限公司 Screen screenshot method and system
WO2015144019A1 (en) * 2014-03-26 2015-10-01 努比亚技术有限公司 Photo sharing method and mobile terminal
CN106909290A (en) * 2017-04-06 2017-06-30 深圳天珑无线科技有限公司 A kind of method and device of screenshotss
CN108037871A (en) * 2017-11-07 2018-05-15 维沃移动通信有限公司 Screenshotss method and mobile terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108897475B (en) * 2018-05-31 2020-09-29 维沃移动通信有限公司 Picture processing method and mobile terminal
CN108898552B (en) * 2018-06-27 2023-09-12 图为信息科技(深圳)有限公司 Picture splicing method, double-screen terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN109683777A (en) 2019-04-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant