CN109559280B - Image processing method and terminal - Google Patents

Image processing method and terminal

Info

Publication number
CN109559280B
Authority
CN
China
Prior art keywords
input
image
splicing
ith
target
Prior art date
Legal status
Active
Application number
CN201811553560.4A
Other languages
Chinese (zh)
Other versions
CN109559280A (en)
Inventor
Li Bing (李兵)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811553560.4A
Publication of CN109559280A
Application granted
Publication of CN109559280B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Abstract

The invention discloses an image processing method and a terminal. The method includes: receiving N inputs; and in response to the N inputs, generating a stitched image formed by stitching N images corresponding to the N inputs. Stitching parameters of the N images are associated with input parameters of the N inputs, and the stitching parameters include at least one of a stitching position and a stitching angle; N is an integer greater than 1. According to the invention, the N images are stitched according to stitching parameters corresponding to the N input parameters, which improves the diversity of the stitching effect.

Description

Image processing method and terminal
Technical Field
Embodiments of the present invention relate to the field of communications technology, and in particular to an image processing method and a terminal.
Background
With the development of terminals, their functions have become increasingly diversified, and image processing has become one of a terminal's basic functions. At present, image stitching generally involves selecting a plurality of images at one time and generating a stitched image according to a fixed stitching template, so the stitching manner is limited by the fixed template styles. In particular, when a plurality of screenshots need to be stitched, they must be captured one at a time and then stitched through an image processing application; the operation is cumbersome and is likewise limited by the fixed template styles. To simplify the operation, a terminal can capture a long screenshot (an image whose width equals the width of the display interface but whose length exceeds the length of the display interface). Capturing a long screenshot captures more content, but the content is displayed in a single fixed manner, so the display is limited.
Disclosure of Invention
The invention provides an image processing method and a terminal, to solve the problem in the prior art that the manner of image stitching is limited.
To solve the above technical problem, the invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including:
receiving N inputs;
in response to the N inputs, generating a stitched image formed by stitching N images corresponding to the N inputs;
wherein stitching parameters of the N images are associated with input parameters of the N inputs, the stitching parameters include at least one of a stitching position and a stitching angle, and N is an integer greater than 1.
In a second aspect, an embodiment of the present invention further provides a terminal, including:
a receiving module, configured to receive N inputs;
a response module, configured to generate, in response to the N inputs, a stitched image formed by stitching N images corresponding to the N inputs;
wherein stitching parameters of the N images are associated with input parameters of the N inputs, the stitching parameters include at least one of a stitching position and a stitching angle, and N is an integer greater than 1.
In a third aspect, an embodiment of the present invention further provides a terminal, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method described above.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method as described above.
In the embodiments of the invention, N inputs are received, and in response to the N inputs, a stitched image formed by stitching N images corresponding to the N inputs is generated, where stitching parameters of the N images are associated with input parameters of the N inputs and include at least one of a stitching position and a stitching angle. Because the N images are stitched according to stitching parameters corresponding to the N input parameters, the diversity of the stitching effect is improved, the user can adjust and select stitching positions in diverse ways, and the diversity of the stitching operation is improved.
Drawings
FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an input mode selection box according to an embodiment of the present invention;
FIG. 3 is a schematic view of an operation button for completing stitching according to an embodiment of the present invention;
FIG. 4 shows a flow chart of determining a reference image according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a stitching identifier according to an embodiment of the invention;
FIG. 6a is a schematic diagram of the stitching position corresponding to a right-to-left input direction of the i-th input according to an embodiment of the present invention;
FIG. 6b is a schematic diagram of the stitching position corresponding to a left-to-right input direction of the i-th input according to an embodiment of the present invention;
FIG. 6c is a schematic diagram of the stitching position corresponding to a bottom-to-top input direction of the i-th input according to an embodiment of the present invention;
FIG. 6d is a schematic diagram of the stitching position corresponding to a top-to-bottom input direction of the i-th input according to an embodiment of the present invention;
FIG. 6e is a schematic diagram of the stitching position corresponding to a bottom-right-to-top-left input direction of the i-th input according to an embodiment of the present invention;
FIG. 6f is a schematic diagram of the stitching position corresponding to a bottom-left-to-top-right input direction of the i-th input according to an embodiment of the present invention;
FIG. 6g is a schematic diagram of the stitching position corresponding to a top-right-to-bottom-left input direction of the i-th input according to an embodiment of the present invention;
FIG. 6h is a schematic diagram of the stitching position corresponding to a top-left-to-bottom-right input direction of the i-th input according to an embodiment of the present invention;
FIG. 7a is a schematic illustration of an i-th image according to an embodiment of the present invention;
FIG. 7b is a schematic diagram of stitching the i-th image and the (i-1)-th image according to an embodiment of the present invention;
FIG. 7c is a schematic diagram of stitching the (i+1)-th image and the i-th image according to an embodiment of the present invention;
FIG. 7d is a schematic diagram of stitching the (i+2)-th image and the (i+1)-th image according to an embodiment of the present invention;
FIG. 7e is a schematic diagram of stitching the (i+3)-th image and the (i+2)-th image according to an embodiment of the present invention;
FIG. 7f is a schematic diagram of stitching the (i+4)-th image and the (i+3)-th image according to an embodiment of the present invention;
FIG. 7g is a schematic diagram of a stitched image obtained by stitching the (i-1)-th through (i+4)-th images according to an embodiment of the present invention;
FIG. 8 illustrates a first two-finger sliding input schematic of an embodiment of the present invention;
FIG. 9 illustrates a second two-finger sliding input schematic of an embodiment of the present invention;
FIG. 10 illustrates a third two-finger sliding input schematic of an embodiment of the present invention;
FIG. 11 illustrates a fourth two-finger sliding input schematic of an embodiment of the present invention;
FIG. 12 shows a block diagram of a mobile terminal according to an embodiment of the invention;
FIG. 13 is a schematic diagram of a hardware structure of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort shall fall within the scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides an image processing method, including:
step 11: n inputs are received.
Step 12: in response to the N inputs, generate a stitched image formed by stitching N images corresponding to the N inputs.
The stitching parameters of the N images are associated with the input parameters of the N inputs, and the stitching parameters include at least one of a stitching position and a stitching angle; N is an integer greater than 1.
Wherein each of the N inputs includes an image acquisition sub-input and an image stitching sub-input.
As one implementation, the image acquisition sub-input may be an input that triggers a physical key or a virtual key; in response to each image acquisition sub-input, a screenshot is captured. The image stitching sub-input may be a sliding input of the user on the display screen of the terminal, or an image input of the user's head movement; in response to each image stitching sub-input, the screenshots captured according to the image acquisition sub-inputs are stitched.
As another implementation, the image acquisition sub-input may be a selection input by which the user selects a target image from a plurality of images, for example, selecting a photo from the album; in response to each image acquisition sub-input, a photo is acquired. The image stitching sub-input may be a sliding input of the user on the display screen of the terminal, or an image input of the user's head movement; in response to each image stitching sub-input, the photos acquired according to the image acquisition sub-inputs are stitched.
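As an illustrative sketch of this two-part structure, each of the N inputs can be modeled as an acquisition sub-input paired with a stitching sub-input. The Kotlin type names below (AcquisitionInput, StitchingInput, CompositeInput) are assumptions for illustration, not the patent's implementation:

```kotlin
// Hedged model of one of the N inputs; all names are illustrative.
sealed class AcquisitionInput {
    object KeyTrigger : AcquisitionInput()                             // physical/virtual key -> screenshot
    data class AlbumSelection(val photoId: Long) : AcquisitionInput()  // pick a photo from the album
}

sealed class StitchingInput {
    data class Swipe(val dx: Float, val dy: Float) : StitchingInput()       // slide on the display screen
    data class HeadMotion(val dx: Float, val dy: Float) : StitchingInput()  // head movement via face tracking
}

// One of the N inputs: an image acquisition sub-input followed by an image stitching sub-input.
data class CompositeInput(val acquire: AcquisitionInput, val stitch: StitchingInput)
```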
In the above solution, N inputs are received and, in response to the N inputs, a stitched image formed by stitching N images corresponding to the N inputs is generated, where the stitching parameters of the N images are associated with the input parameters of the N inputs and include at least one of a stitching position and a stitching angle. Because the N images are stitched according to stitching parameters corresponding to the N input parameters, the diversity of the stitching effect is improved, the user can adjust and select stitching positions in diverse ways, and the diversity of the stitching operation is improved.
The step 11 specifically includes: receiving at least one sliding input of the user on the display screen of the terminal; or receiving at least one image input of the user's head movement.
Further, before the N inputs are received, the method includes:
when the image processing mode is enabled, displaying an input mode selection box on the display screen;
receiving a selection input for the input mode of the N inputs;
and in response to the selection input, determining the input mode of the N inputs.
The input mode selection box includes options for the input modes of the N inputs. For example, the options include, but are not limited to: gesture input (e.g., sliding input) acting on the display screen, image input of head movement, and the like. An example of an input mode selection box is given in fig. 2. Of course, the input mode selection box 21 may have other shapes, such as rectangular, circular, diamond, other polygons, or special shapes; the manner of displaying the options is likewise not limited to that shown in fig. 2.
As an implementation, after selecting the determining input mode, all buttons may be selected, that is, the N inputs are set to be at least one sliding input of the user on the display screen of the terminal, or the N inputs are all image inputs of head movement. The method can be selected once, and the input method of N times of input is determined, so that the method is convenient to operate and is beneficial to simplifying user operation.
As another implementation, after the input mode is confirmed, a single-input button may be selected; that is, only the current input is set as a sliding input of the user on the display screen of the terminal or as an image input of the user's head movement. This approach allows the N inputs to be different kinds of inputs, for example, p sliding inputs of the user on the display screen of the terminal and (N-p) image inputs of the user's head movement, which helps improve the diversity of input modes and facilitates user selection.
In this embodiment, displaying an input mode selection box on the display screen makes it convenient for the user to select the input mode of the N inputs, and ensures that the user can select a suitable input mode in different scenarios for ease of operation.
Specifically, the step 11 further includes: receiving an i-th input.
The step 12 specifically includes: acquiring an i-th image in response to the i-th input;
acquiring an i-th input parameter of the i-th input;
determining a reference image; and
stitching the i-th image to a target position of the reference image based on the i-th input parameter;
where i is an integer greater than 1, and i is less than or equal to N.
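The following minimal Kotlin sketch walks through this per-input loop, assuming the i-th image has already been acquired and its input parameter decoded into an anchor side; Image, Anchor, StitchSession, and splice() are illustrative names, not the patent's API:

```kotlin
// A hedged sketch of steps 11-12 for the i-th input.
class Image  // placeholder for a captured bitmap

enum class Anchor { LEFT, RIGHT, TOP, BOTTOM, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT }

class StitchSession {
    private val images = mutableListOf<Image>()

    // One round: the acquired i-th image, the anchor decoded from the i-th input
    // parameter, and an optional user-chosen reference index (0-based here).
    fun onInput(acquired: Image, anchor: Anchor, referenceIndex: Int? = null) {
        val reference = images.getOrNull(referenceIndex ?: images.lastIndex)
        if (reference == null) {
            images += acquired          // the first image has nothing to stitch to
            return
        }
        splice(acquired, reference, anchor)
        images += acquired
    }

    private fun splice(image: Image, reference: Image, anchor: Anchor) {
        // Compose `image` on the `anchor` side of `reference`; canvas drawing and
        // offset bookkeeping are omitted from this sketch.
    }
}
```

By default the sketch uses the previously stitched image as the reference, matching the (i-1)-th-image strategy described below; a user-selected index overrides it.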
As an implementation manner, the step of acquiring the ith image specifically includes:
performing screen capturing operation to generate a screen capturing image;
and determining the screen capturing image as an ith image.
Specifically, in response to the ith input, performing an ith screen capturing operation, acquiring an ith screen capturing image, and determining the ith screen capturing image as the ith image.
For example, when the screen capturing mode is enabled, a screen capturing operation is performed and the i-th screenshot is generated. Specifically, the screen capturing function can be triggered by a physical key or a virtual key. Preferably, the i-th screenshot is obtained at the moment the screen capturing function is triggered by the physical key or virtual key.
As another implementation manner, the step of acquiring the ith image specifically includes:
controlling a camera to shoot an image;
And determining the image shot by the camera as an ith image.
Specifically, the camera is controlled to capture an image, and the image captured by the camera is determined as the i-th image.
In addition, the N images corresponding to the N inputs may include at least one screenshot and at least one image captured by the camera, or may include N screenshots, or may include N images captured by the camera.
Wherein the i-th input parameter includes: at least one of the movement track, the number of movement tracks and the movement distance of the ith input.
Specifically, a stitching position may be determined according to the movement track, where the stitching position is the position of the i-th screenshot relative to the reference image. For example, when the i-th input is the user sliding on the display screen of the terminal: if the sliding direction is from left to right along the display screen, the screenshot is stitched to the right side of the reference image; if the sliding direction is from bottom left to top right along the display screen, the screenshot is stitched to the upper-right side of the reference image (the top-right corner of the reference image meets the bottom-left corner of the screenshot). In this way a plurality of screenshots can be stitched.
When the i-th input is an image input of the user's head movement, the stitching position of the i-th screenshot relative to the reference image can be determined according to the track of the head movement. For example, the user's head movement track (direction) may be detected by a face recognition module. The manner of determining the stitching position from the head movement track (direction) is similar to that of determining it from the sliding direction, and details are not repeated here.
The reference image may be determined according to the number of movement tracks of the i-th input. For example, according to the number k of movement tracks, the k-th screenshot among the 1st to (i-1)-th screenshots is determined as the reference image. The number of movement tracks may be determined from the number of bends in the track: treating the track of the sliding input or of the head-movement image input as a polyline, if the polyline has 1 bend, the number of movement tracks is 2; if it has 2 bends, the number of movement tracks is 3. That is, if the track has L bends, the number of movement tracks is L+1.
In addition, a stitching angle can be determined according to the sliding distance, where the stitching angle is the angle between the i-th screenshot and the reference image.
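A hedged sketch of extracting these parameters from a sampled gesture track follows; the bendCount() helper, its 45-degree turn threshold, and the point-list representation are assumptions, since the patent does not specify how bends are detected:

```kotlin
import kotlin.math.abs
import kotlin.math.atan2

// Counts sharp turns (bends) in a sampled track; the threshold is an assumption.
fun bendCount(track: List<Pair<Float, Float>>, thresholdDeg: Double = 45.0): Int {
    var bends = 0
    for (j in 1 until track.size - 1) {
        val (x0, y0) = track[j - 1]
        val (x1, y1) = track[j]
        val (x2, y2) = track[j + 1]
        val a1 = atan2((y1 - y0).toDouble(), (x1 - x0).toDouble())
        val a2 = atan2((y2 - y1).toDouble(), (x2 - x1).toDouble())
        var turn = Math.toDegrees(abs(a2 - a1))
        if (turn > 180) turn = 360 - turn     // shortest angular difference
        if (turn > thresholdDeg) bends++      // one sharp turn counts as one bend
    }
    return bends
}

// L bends -> L + 1 movement tracks; k movement tracks select the k-th of the
// 1st..(i-1)-th screenshots as the reference image.
fun referenceIndexFromTrack(track: List<Pair<Float, Float>>): Int = bendCount(track) + 1
```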
In this embodiment, if an instruction input by the user to complete stitching is obtained after the i-th image has been stitched to the target position of the reference image based on the i-th input parameter, the stitched image is output, for example: displayed in the current interface, saved to a preset folder, or sent to a target device (target contact). If a screen capturing instruction is obtained instead, the (i+1)-th screenshot is acquired according to the screen capturing instruction and step 12 is executed again, until the stitching-complete instruction input by the user is obtained.
As shown in fig. 3, an example of an operation button 31 for completing the stitching is given; the operation button 31 may be displayed in the lower right corner of the display interface. Preferably, the operation button 31 is displayed on the display interface after the stitched image is generated and before any operation other than the stitching-complete instruction is received. That is, while the operation button 31 is displayed on the current display interface, if an (N+1)-th input of the user is received, the display of the operation button 31 is canceled and the (N+1)-th input is responded to.
In this solution, the N images are stitched according to the stitching parameters corresponding to the N input parameters, which improves the diversity of the stitching effect, ensures that the user can adjust and select stitching positions in diverse ways, and improves the diversity of the stitching operation. In particular, at least two screenshots are stitched during the screen capturing process, so that one stitched image contains the content of at least two display interfaces. This enriches the captured content and avoids the cumbersome process of capturing at least two screenshots separately and then stitching them through an application with a collage function. Since stitching follows the position indication operations input by the user, diverse adjustment and selection of stitching positions is ensured, improving the diversity of the screen capturing operation.
As shown in fig. 4, as an implementation manner, the step of determining the reference image specifically includes:
step 121: a first input of a user is received.
Step 122: in response to the first input, one of the 1 st to i-1 st images is determined to be the reference image.
Further, methods of determining a reference image include, but are not limited to, the following implementations:
mode one: after receiving an ith input, generating a thumbnail of an ith image corresponding to the ith input; and displaying a stitching process of the thumbnail of the ith image to the target position of the thumbnail of the reference image.
The step 121 specifically includes: receiving a first input of the user on a target thumbnail. The step 122 specifically includes: in response to the first input, determining the target image corresponding to the target thumbnail as the reference image; the target thumbnail is one of the 1st to (i-1)-th thumbnails.
Specifically, the thumbnails can be displayed in a floating window at the lower left corner of the display interface, so that the user can intuitively follow the stitching process of the i-1 already-stitched images and, when stitching the i-th image, see the positions available for it; and through the first input on the target thumbnail, the reference image can be determined intuitively and accurately.
Mode two: before the i-th input is received, a stitching identifier is displayed, where the stitching identifier indicates at least one candidate stitching position.
An example of a stitching identifier is given in fig. 5. The arrows in the stitching identifier 51 represent candidate stitching positions of the i-th image on the stitched image; fig. 5 shows 8 arrows, representing 8 candidate stitching positions of the i-th image on the reference image, so that the user can determine the target position at which the i-th image is stitched to the reference image and then perform the i-th input according to the candidate stitching position.
The step 121 specifically includes: receiving a first input of the user on the stitching identifier. The step 122 specifically includes: in response to the first input, moving the stitching identifier, and determining the image at the position of the stitching identifier at the input end time of the first input as the reference image.
Specifically, the first input to the stitching identifier 51 may be a movement input to the stitching identifier 51, and in response to the movement input, the stitching identifier 51 is moved, and an image corresponding to a position of the stitching identifier 51 at an input end time of the first input is determined as the reference image.
Further, while the stitching identifier of fig. 5 is being moved, a first identifier that moves along with it is displayed in the floating window, and the image corresponding to the thumbnail at which the first identifier is located at the input end time of the first input is determined as the reference image.
For example, the target image corresponding to the target thumbnail at the position of the stitching identifier 51 at the input end time of the first input may be determined as the reference image. Indicating the selection of the reference image through the stitching identifier makes it convenient for the user to flexibly select the reference image, helps improve operation accuracy, and avoids the false triggering that may occur when the thumbnails are small and closely arranged.
As another implementation, the step of determining the reference image specifically includes: determining the (i-1)-th image as the reference image.
In this embodiment, in the process of receiving the N inputs and stitching the N images corresponding to them, using the (i-1)-th image as the reference image of the i-th image saves the user the operation of selecting a reference image, which helps simplify the operation.
As still another implementation, the step of determining the reference image specifically includes: determining a k-th image as the reference image, where the k-th image is the reference image determined during one of the 2nd to (i-1)-th inputs, k is an integer greater than 1, and k is less than or equal to i-1.
In this embodiment, using the k-th image, i.e. the reference image determined during one of the 2nd to (i-1)-th inputs, as the reference image of the i-th image avoids the tedious process of repeatedly selecting the same reference image and helps simplify the user operation.
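The three strategies can be combined in one sketch; the ReferenceStrategy enum and the session state below are illustrative assumptions rather than the patent's structure:

```kotlin
// Hedged sketch of reference-image selection; all names are illustrative.
enum class ReferenceStrategy { USER_SELECTED, PREVIOUS, REMEMBERED }

class ReferenceSelector {
    private var rememberedK: Int? = null   // a k chosen during an earlier input

    // Returns the 1-based index of the reference image for the i-th input.
    fun referenceIndexFor(i: Int, strategy: ReferenceStrategy, selected: Int? = null): Int =
        when (strategy) {
            ReferenceStrategy.USER_SELECTED -> {
                require(selected != null && selected in 1 until i) {
                    "the reference must be one of the 1st..(i-1)-th images"
                }
                rememberedK = selected     // may be reused by later inputs
                selected
            }
            ReferenceStrategy.PREVIOUS -> i - 1                      // default: the (i-1)-th image
            ReferenceStrategy.REMEMBERED -> rememberedK ?: (i - 1)   // fall back if none was stored
        }
}
```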
The i-th input parameter is the input direction of the i-th input; the stitching the i-th image to the target position of the reference image based on the i-th input parameter includes:
determining the target position based on the input direction of the i-th input; and
stitching the i-th image to the target position of the reference image.
Specifically, the i-th input indicating the target position may be an input performed by the user according to the stitching identifier 51 in fig. 5.
In this embodiment, determining the target position at which the i-th image is stitched to the reference image based on the input direction of the i-th input allows the stitching position to be set in diverse ways, thereby improving the diversity of the stitched image.
Further, the stitching identifier 51 may be displayed with a preset transparency (e.g., a transparency of 50%). When the reference image has 8 candidate stitching positions, the 8 arrows are highlighted (e.g., in a preset color or pattern); when the reference image has fewer than 8 candidate stitching positions, the arrows corresponding to the available positions are highlighted and the remaining arrows are displayed with the preset transparency, so that the user can distinguish the candidate positions at which stitching onto the reference image is possible and operate accordingly.
The following describes the splice location indicated by the splice mark with reference to the accompanying drawings:
When the i-th input is received, assume that image A has been obtained previously and the i-th image B is now obtained. If an input from right to left along the display screen is acquired (the input direction is shown by the black arrow on the stitching identifier 51 in fig. 6a), the i-th image B is stitched to the left side of the reference image A (as shown in the rectangular box in fig. 6a);
if an input from left to right along the display screen is acquired (the input direction is shown by the black arrow on the stitching identifier 51 in fig. 6b), the i-th image B is stitched to the right side of the reference image A (as shown in the rectangular box in fig. 6b);
if an input from bottom to top along the display screen is acquired (the input direction is shown by the black arrow on the stitching identifier 51 in fig. 6c), the i-th image B is stitched to the upper side of the reference image A (as shown in the rectangular box in fig. 6c);
if an input from top to bottom along the display screen is acquired (the input direction is shown by the black arrow on the stitching identifier 51 in fig. 6d), the i-th image B is stitched to the lower side of the reference image A (as shown in the rectangular box in fig. 6d);
if an input from bottom right to top left along the display screen is acquired (the input direction is shown by the black arrow on the stitching identifier 51 in fig. 6e), the i-th image B is stitched to the upper-left side of the reference image A (as shown in the rectangular box in fig. 6e);
if an input from bottom left to top right along the display screen is acquired (the input direction is shown by the black arrow on the stitching identifier 51 in fig. 6f), the i-th image B is stitched to the upper-right side of the reference image A (as shown in the rectangular box in fig. 6f);
if an input from top right to bottom left along the display screen is acquired (the input direction is shown by the black arrow on the stitching identifier 51 in fig. 6g), the i-th image B is stitched to the lower-left side of the reference image A (as shown in the rectangular box in fig. 6g);
if an input from top left to bottom right along the display screen is acquired (the input direction is shown by the black arrow on the stitching identifier 51 in fig. 6h), the i-th image B is stitched to the lower-right side of the reference image A (as shown in the rectangular box in fig. 6h).
In addition, if an image already exists at the target position determined according to the input direction of the i-th input, the i-th screenshot is overlaid on, or replaces, the image at the target position, so that image positions can be adjusted during the stitching process.
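A sketch of this eight-direction decoding follows; the 45-degree sector boundaries are an assumption, since the patent only names the eight directions of figs. 6a to 6h:

```kotlin
import kotlin.math.atan2

// Hedged mapping from a swipe vector to the side of the reference image
// that receives the i-th image.
enum class StitchSide { LEFT, RIGHT, TOP, BOTTOM, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT }

fun sideForSwipe(dx: Float, dy: Float): StitchSide {
    // Screen coordinates: +x is right, +y is down; negate dy so angles read as on paper.
    val deg = Math.toDegrees(atan2((-dy).toDouble(), dx.toDouble()))
    return when {
        deg >= -22.5 && deg < 22.5    -> StitchSide.RIGHT         // left-to-right (fig. 6b)
        deg >= 22.5 && deg < 67.5     -> StitchSide.TOP_RIGHT     // bottom-left to top-right (fig. 6f)
        deg >= 67.5 && deg < 112.5    -> StitchSide.TOP           // bottom-to-top (fig. 6c)
        deg >= 112.5 && deg < 157.5   -> StitchSide.TOP_LEFT      // bottom-right to top-left (fig. 6e)
        deg >= 157.5 || deg < -157.5  -> StitchSide.LEFT          // right-to-left (fig. 6a)
        deg >= -157.5 && deg < -112.5 -> StitchSide.BOTTOM_LEFT   // top-right to bottom-left (fig. 6g)
        deg >= -112.5 && deg < -67.5  -> StitchSide.BOTTOM        // top-to-bottom (fig. 6d)
        else                          -> StitchSide.BOTTOM_RIGHT  // top-left to bottom-right (fig. 6h)
    }
}
```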
Further, after the i-th image and the reference image are stitched according to the target position and the stitched image is generated, the method further includes:
displaying a thumbnail of the stitched image in a preset display area of the display screen.
An example of a thumbnail of a stitched image is given in fig. 7a. Specifically, since the first acquired image may not need to be stitched, the stitching identifier may be left undisplayed after the first screenshot is obtained, and after the first image is acquired its thumbnail 7a is displayed at the lower left corner of the display interface. When the i-th image is acquired, it may need to be stitched with one of the 1st to (i-1)-th images, so a stitching identifier may be displayed to prompt the user to input a position indication operation; the target position is determined according to the input direction of the user's i-th input, the i-th image is stitched at the target position of the reference image, and the stitching process is displayed through the thumbnail.
The image processing method according to the embodiment of the invention is specifically described below with reference to the accompanying drawings:
taking the sliding input and the i-1 th image as the reference image as an example, if the i-th image is acquired and the sliding input sliding from left to right is acquired on the basis of fig. 7a (the sliding direction is shown by a black arrow in fig. 7 b), the i-th image is spliced on the right side of the reference image as shown by a thumbnail 7b in fig. 7 b;
on the basis of fig. 7b, if the i+1th image is acquired and a sliding input (the sliding direction is indicated by a black arrow in fig. 7 c) is acquired to slide from the upper right to the lower left, the i+1th image is stitched to the lower left of the i-th image, as indicated by a thumbnail 7c in fig. 7 c;
on the basis of fig. 7c, if the (i+2) th image is acquired and a sliding input (the sliding direction is indicated by a black arrow in fig. 7 d) is acquired to slide from left to right, the (i+2) th image is stitched to the right side of the (i+1) th image, as indicated by a thumbnail 7d in fig. 7 d;
on the basis of fig. 7d, if the (i+3) th image is acquired and a sliding input (the sliding direction is shown by the black arrow in fig. 7 e) sliding from top to bottom is acquired, the (i+3) th image is stitched to the lower side of the (i+2) th image, as shown by the thumbnail 7e in fig. 7 e;
On the basis of fig. 7e, if the (i+4)-th image is acquired and a sliding input sliding from left to right is acquired (the sliding direction is shown by the black arrow in fig. 7f), the (i+4)-th image is stitched to the right side of the (i+3)-th image, as shown by the thumbnail 7f in fig. 7f;
once the stitched image corresponding to the thumbnail 7f in fig. 7f is obtained, the stitching operation may be completed by clicking the operation button 31 for completing the stitching shown in fig. 3, and the stitched image is displayed on the display screen, as shown in fig. 7g.
According to this solution, at least two screenshots obtained by screen capturing are stitched during the screen capturing process, so that one stitched image contains the content of at least two display interfaces. This enriches the captured content, avoids the cumbersome process of capturing at least two screenshots separately and then stitching them through an application with a collage function, and, since stitching follows the position indication operations input by the user, ensures diverse adjustment and selection of stitching positions, improving the diversity of the screen capturing operation.
The N inputs include a first sliding sub-input and a second sliding sub-input for selecting a first target image and a second target image; the N images include the first target image and the second target image.
The step 12 specifically includes:
adjusting a stitching angle between the first target image and the second target image in response to the first sliding sub-input and the second sliding sub-input.
The stitching angle is the angle between the plane of the first target image and the plane of the second target image; the first target image and the second target image may or may not be adjacent.
Specifically, the first sliding sub-input and the second sliding sub-input may be a two-finger sliding input, that is, two fingers simultaneously act on the display screen, wherein one finger slides along a first direction and the other finger slides along a second direction, and the first direction and the second direction are opposite.
For example: the two-finger sliding input may be a sliding input in which two fingers slide toward each other along the width direction of the display screen, as shown by a first two-finger sliding input 81 of fig. 8; the two-finger sliding operation may also be a sliding operation in which two fingers slide back along the width direction of the display screen, as shown in the second two-finger sliding input 91 in fig. 9; the two-finger sliding input may also be a sliding input in which two fingers slide in opposite directions along the length direction of the display screen, as shown in the third two-finger sliding input 101 in fig. 10; the two-finger sliding input may also be a sliding input in which two fingers slide back along the length direction of the display screen, as shown in the fourth two-finger sliding input 111 in fig. 11.
In this embodiment, adjusting the stitching angle between at least two target images among the N images helps improve the diversity of the stitched image.
Further, the adjusting the stitching angle between the first target image and the second target image in response to the first sliding sub-input and the second sliding sub-input includes:
acquiring the sliding direction and the sliding distance of the first sliding sub-input and the second sliding sub-input;
and adjusting a splicing angle between the first target image and the second target image according to the sliding direction and the sliding distance.
Still further, the adjusting the stitching angle between the first target image and the second target image according to the sliding direction and the sliding distance includes:
determining the rotation direction of the target image to be adjusted according to the sliding direction;
determining the rotation angle of the target image to be adjusted according to the sliding distance;
according to the rotation direction, rotating the target image to be adjusted by the rotation angle;
the target image to be adjusted is at least one of the first target image and the second target image.
Specifically, as shown in the enlarged frame 81 in fig. 8 and the enlarged frame 102 in fig. 10, when the two-finger sliding input is one in which the two fingers slide toward each other along the width direction or the length direction of the display screen, the first target image B and/or the second target image A is rotated toward the display screen, to the stitching angle; as shown in the enlarged frame 92 in fig. 9 and the enlarged frame 112 in fig. 11, when the two-finger sliding input is one in which the two fingers slide away from each other along the width direction or the length direction of the display screen, the first target image B and/or the second target image A is rotated away from the display screen, to the stitching angle.
Further, when the two fingers slide toward each other along the width or length direction of the display screen, the first target image B and/or the second target image A is rotated toward the display screen about an axis along the length direction of the display screen, to the stitching angle; when the two fingers slide away from each other along the width or length direction of the display screen, the first target image B and/or the second target image A is rotated away from the display screen about an axis along the width direction of the display screen, to the stitching angle.
It should be noted that the correspondence between two-finger sliding inputs in different directions and the rotation direction may also be reversed: when the two fingers slide toward each other, the first target image B and/or the second target image A is rotated away from the display screen, and when they slide away from each other, rotated toward the display screen. Alternatively, when the two fingers slide toward each other, the rotation may be about the axis along the length direction of the display screen, and when they slide away from each other, about the axis along the width direction of the display screen.
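One of these pairings can be sketched as follows; since the paragraph above allows the opposite correspondences as well, the particular mapping chosen here is an assumption, not a fixed rule:

```kotlin
// Hedged gesture-to-rotation mapping: converging fingers rotate the target
// image(s) toward the screen about the lengthwise axis, diverging fingers
// rotate away about the widthwise axis.
enum class ScreenAxis { LENGTHWISE, WIDTHWISE }

data class Rotation(val towardScreen: Boolean, val axis: ScreenAxis)

fun rotationFor(fingersConverging: Boolean): Rotation =
    if (fingersConverging)
        Rotation(towardScreen = true, axis = ScreenAxis.LENGTHWISE)
    else
        Rotation(towardScreen = false, axis = ScreenAxis.WIDTHWISE)
```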
Specifically, the first sliding distance of the first sliding sub-input may be the straight-line distance between the start point and the end point of the first sliding sub-input, and the second sliding distance of the second sliding sub-input may be the straight-line distance between the start point and the end point of the second sliding sub-input.
As one implementation, the sliding distance of the first sliding sub-input and the second sliding sub-input may be determined according to a sum of the first sliding distance and the second sliding distance.
As another implementation, the sliding distance of the first sliding sub-input and the second sliding sub-input may be determined according to an average value of the first sliding distance and the second sliding distance.
Further, according to a preset correspondence between sliding distance and stitching angle, the stitching angle corresponding to the sliding distance of the first sliding sub-input and the second sliding sub-input can be determined, thereby determining the stitching angle between the first target image and the second target image.
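A sketch of this correspondence is given below; the use of the average of the two finger distances (the sum is equally allowed above), the linear degrees-per-pixel scale, and the 90-degree cap are all assumptions:

```kotlin
import kotlin.math.hypot

// Hedged distance-to-angle mapping for the two-finger sliding input.
fun stitchAngleDeg(
    start1: Pair<Float, Float>, end1: Pair<Float, Float>,
    start2: Pair<Float, Float>, end2: Pair<Float, Float>,
    degreesPerPixel: Float = 0.25f,   // preset scale (assumption)
    maxDeg: Float = 90f               // cap on the stitching angle (assumption)
): Float {
    val d1 = hypot(end1.first - start1.first, end1.second - start1.second)  // first finger
    val d2 = hypot(end2.first - start2.first, end2.second - start2.second)  // second finger
    val distance = (d1 + d2) / 2f     // average of the two straight-line sliding distances
    return (distance * degreesPerPixel).coerceAtMost(maxDeg)
}
```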
In this embodiment, during the stitching of the N images, the stitching angle between any first target image and second target image among the N images can be adjusted, which further improves the diversity of stitching modes and achieves display effects with different stitching angles.
Further, the first target image may be the i-th image, and the second target image may be the reference image determined in the above embodiment; alternatively, the first target image may be the i-th image, and the second target image may be a reference image re-determined in the manner of determining the reference image described above. In this way, during the stitching of the i-th image and the reference image, their stitching angle can be adjusted, further improving the diversity of the screen capturing operation.
As shown in fig. 12, an embodiment of the present invention further provides a terminal 1200, including:
the receiving module 1210 is configured to receive N inputs.
And a response module 1220, configured to generate, in response to the N inputs, a stitched image formed by stitching N images corresponding to the N inputs.
The stitching parameters of the N images are associated with the input parameters of the N inputs, and the stitching parameters include at least one of a stitching position and a stitching angle; N is an integer greater than 1.
Wherein the receiving module 1210 includes at least one of:
and the first receiving sub-module is used for receiving at least one sliding input of a user on a display screen of the terminal.
And a second receiving sub-module for receiving an image input of at least one head movement of the user.
Wherein, the receiving module 1210 includes:
and the third receiving submodule is used for receiving the ith input.
The response module 1220 includes:
and the first response sub-module is used for responding to the ith input and acquiring an ith image.
And the acquisition sub-module is used for acquiring the ith input parameter of the ith input.
And the determination submodule is used for determining the reference image.
And the splicing sub-module is used for splicing the ith image to the target position of the reference image based on the ith input parameter.
Wherein i is an integer greater than 1, and i is less than or equal to N.
Wherein the determining submodule includes:
a receiving unit for receiving a first input of a user;
and a response unit configured to determine one of the 1 st image to the i-1 st image as the reference image in response to the first input.
Wherein, the terminal 1200 further comprises:
and the generation module is used for generating a thumbnail of an ith image corresponding to the ith input after receiving the ith input.
And the first display module is used for displaying a splicing process of splicing the thumbnail of the ith image to the target position of the thumbnail of the reference image.
Wherein the receiving unit includes:
and the first receiving subunit is used for receiving a first input of the target thumbnail by the user.
The response unit includes:
and the first response subunit is used for responding to the first input and determining a target image corresponding to the target thumbnail as the reference image.
The target thumbnail is one of the 1 st thumbnail to the i-1 st thumbnail.
Wherein, the terminal 1200 further comprises:
and the second display module is used for displaying a splicing identifier before receiving the ith input, wherein the splicing identifier indicates at least one alternative splicing position.
The receiving unit includes:
and the second receiving subunit is used for receiving a first input of the splicing identification from a user.
The response unit includes:
and the second response subunit is used for responding to the first input, moving the splicing mark and determining an image of the splicing mark at a position corresponding to the input ending moment of the first input as the reference image.
Wherein the determining submodule includes:
a first determining unit configured to determine an i-1 st image as the reference image.
A second determining unit configured to determine a kth image as the reference image, where the kth image is a reference image determined by one of the 2 nd input to the i-1 st input, k is an integer greater than 1, and k is less than or equal to i-1.
The ith input parameter is the input direction of the ith input;
the splicing submodule comprises:
and a third determining triplet for determining a target position based on the input direction of the ith input.
And the stitching unit is used for stitching the ith image to the target position of the reference image.
The N inputs include a first sliding sub-input and a second sliding sub-input for selecting a first target image and a second target image; the N images include the first target image and the second target image.
The response module 1220 includes:
and the second response sub-module is used for responding to the first sliding sub-input and the second sliding sub-input and adjusting the splicing angle between the first target image and the second target image.
Wherein the second response submodule includes:
an acquisition unit configured to acquire a sliding direction and a sliding distance of the first sliding sub-input and the second sliding sub-input;
and the adjusting unit is used for adjusting the splicing angle between the first target image and the second target image according to the sliding direction and the sliding distance.
Wherein the adjusting unit includes:
and the fourth determination subunit is used for determining the rotation direction of the target image to be adjusted according to the sliding direction.
And a fifth determination subunit, configured to determine a rotation angle of the target image to be adjusted according to the sliding distance.
And the adjustment subunit is used for rotating the target image to be adjusted by the rotation angle according to the rotation direction.
The target image to be adjusted is at least one of the first target image and the second target image.
Wherein the first response sub-module comprises:
And the generating unit is used for executing screen capturing operation and generating a screen capturing image.
And a third determining unit configured to determine the screen capturing image as an i-th image.
Wherein the first response sub-module comprises:
and the control unit is used for controlling the camera to shoot an image.
And a fourth determining unit, configured to determine the image captured by the camera as an ith image.
The terminal 1200 in the embodiment of the present invention receives N inputs and, in response to the N inputs, generates a stitched image formed by stitching N images corresponding to the N inputs, where the stitching parameters of the N images are associated with the input parameters of the N inputs and include at least one of a stitching position and a stitching angle. Because the N images are stitched according to stitching parameters corresponding to the N input parameters, the diversity of the stitching effect is improved, the user can adjust and select stitching positions in diverse ways, and the diversity of the stitching operation is improved.
Fig. 13 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention.
The terminal 1300 includes, but is not limited to: radio frequency unit 1301, network module 1302, audio output unit 1303, input unit 1304, sensor 1305, display unit 1306, user input unit 1307, interface unit 1308, memory 1309, processor 1310, and power source 1311. It will be appreciated by those skilled in the art that the terminal structure shown in fig. 13 is not limiting of the terminal and that the terminal may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In the embodiment of the invention, the terminal comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
The processor 1310 is configured to: receive N inputs; and in response to the N inputs, generate a stitched image formed by stitching N images corresponding to the N inputs; where the stitching parameters of the N images are associated with the input parameters of the N inputs, the stitching parameters include at least one of a stitching position and a stitching angle, and N is an integer greater than 1.
The terminal 1300 in the embodiment of the present invention receives N inputs and, in response to the N inputs, generates a stitched image formed by stitching N images corresponding to the N inputs, where the stitching parameters of the N images are associated with the input parameters of the N inputs and include at least one of a stitching position and a stitching angle. Because the N images are stitched according to stitching parameters corresponding to the N input parameters, the diversity of the stitching effect is improved, the user can adjust and select stitching positions in diverse ways, and the diversity of the stitching operation is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1301 may be used to receive and send signals in the process of receiving and sending information or during a call. Specifically, downlink data from a base station is received and then delivered to the processor 1310 for processing, and uplink data is sent to the base station. Generally, the radio frequency unit 1301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1301 may also communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 1302, such as helping the user to send and receive e-mail, browse web pages, access streaming media, etc.
The audio output unit 1303 may convert audio data received by the radio frequency unit 1301 or the network module 1302 or stored in the memory 1309 into an audio signal and output as sound. Also, the audio output unit 1303 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the terminal 1300. The audio output unit 1303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1304 is used for receiving audio or video signals. The input unit 1304 may include a graphics processor (Graphics Processing Unit, GPU) 13041 and a microphone 13042, the graphics processor 13041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frame may be displayed on the display unit 1306. The image frames processed by the graphics processor 13041 may be stored in memory 1309 (or other storage medium) or transmitted via the radio frequency unit 1301 or the network module 1302. The microphone 13042 can receive sound and can process such sound into audio data. The processed audio data may be converted into a format output that can be transmitted to the mobile communication base station via the radio frequency unit 1301 in the case of a telephone call mode.
Terminal 1300 also includes at least one sensor 1305, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 13061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 13061 and/or backlight when the terminal 1300 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when the accelerometer sensor is stationary, and can be used for recognizing the terminal gesture (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 1305 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 1306 is used to display information input by a user or information provided to the user. The display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1307 may be used to receive input numerical or character information and generate key signal inputs related to user settings of the terminal and function control. Specifically, the user input unit 1307 includes a touch panel 13071 and other input devices 13072. Touch panel 13071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on touch panel 13071 or thereabout touch panel 13071 using any suitable object or accessory such as a finger, stylus, or the like). The touch panel 13071 can include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 1310, and receives and executes commands sent from the processor 1310. In addition, the touch panel 13071 can be implemented in various types of resistive, capacitive, infrared, surface acoustic wave, and the like. The user input unit 1307 may further include other input devices 13072 in addition to the touch panel 13071. In particular, other input devices 13072 may include, but are not limited to, physical keyboards, function keys (e.g., volume control keys, switch keys, etc.), trackballs, mice, joysticks, and so forth, which are not described in detail herein.
Further, the touch panel 13071 may be overlaid on the display panel 13061. When the touch panel 13071 detects a touch operation on or near it, the touch operation is transmitted to the processor 1310 to determine the type of the touch event, and the processor 1310 then provides a corresponding visual output on the display panel 13061 according to the type of the touch event. Although in fig. 13 the touch panel 13071 and the display panel 13061 are shown as two independent components implementing the input and output functions of the terminal, in some embodiments the touch panel 13071 and the display panel 13061 may be integrated to implement the input and output functions of the terminal, which is not limited herein.
The interface unit 1308 is an interface through which an external device is connected to the terminal 1300. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1308 may be used to receive input (e.g., data information or power) from an external device and transmit the received input to one or more elements within the terminal 1300, or may be used to transmit data between the terminal 1300 and an external device.
The memory 1309 may be used to store software programs as well as various data. The memory 1309 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to the use of the terminal (such as audio data or a phonebook). In addition, the memory 1309 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1310 is the control center of the terminal. It connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 1309 and calling the data stored in the memory 1309, thereby monitoring the terminal as a whole. The processor 1310 may include one or more processing units; preferably, the processor 1310 may integrate an application processor, which primarily handles the operating system, user interface, applications, and the like, with a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1310.
The terminal 1300 may also include a power source 1311 (e.g., a battery) for powering the various components. The power source 1311 may be logically connected to the processor 1310 via a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
In addition, the terminal 1300 includes some functional modules, which are not shown, and will not be described herein.
Preferably, an embodiment of the present invention further provides a terminal, including a processor 1310, a memory 1309, and a computer program stored in the memory 1309 and executable on the processor 1310. When executed by the processor 1310, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not described herein again.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not described herein again. The computer-readable storage medium may be, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware alone; in many cases, the former is the preferred implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, a magnetic disk, or an optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods according to the embodiments of the present invention.
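As a purely illustrative companion to the software implementation discussed above, the following Python sketch models the claimed flow in simplified form: each of the N inputs supplies an image together with an input direction, a reference image is maintained, and each subsequent image is stitched to the reference at a target position derived from the input direction (compare claims 3 and 9 below). The function names, the direction-to-position mapping, and the use of the Pillow library are assumptions made for illustration only; they are not part of the claimed method.

    # Illustrative sketch only; all names and the direction-to-position
    # mapping are hypothetical, and Pillow is assumed to be installed.
    from PIL import Image

    # Hypothetical mapping from an input (e.g., swipe) direction to the
    # offset of the new image relative to the reference image.
    OFFSETS = {
        "right": lambda ref, img: (ref.width, 0),    # append to the right
        "down":  lambda ref, img: (0, ref.height),   # append below
        "left":  lambda ref, img: (-img.width, 0),   # prepend on the left
        "up":    lambda ref, img: (0, -img.height),  # prepend on top
    }

    def stitch(reference, new_image, direction):
        """Stitch new_image to reference at the position implied by direction."""
        dx, dy = OFFSETS[direction](reference, new_image)
        # Bounding box of both images in a shared coordinate system.
        left, top = min(0, dx), min(0, dy)
        width = max(reference.width, dx + new_image.width) - left
        height = max(reference.height, dy + new_image.height) - top
        canvas = Image.new("RGB", (width, height), "white")
        canvas.paste(reference, (-left, -top))
        canvas.paste(new_image, (dx - left, dy - top))
        return canvas

    def stitch_n(images_with_directions):
        """Fold N (image, direction) pairs into one stitched image.

        The first image serves as the initial reference; each ith image is
        stitched to the running result at the target position derived from
        the ith input direction.
        """
        result, _ = images_with_directions[0]
        for image, direction in images_with_directions[1:]:
            result = stitch(result, image, direction)
        return result

For example, stitching a sequence of screen capture images with the direction "down" for every input yields a vertically concatenated result, while mixing "down" and "right" yields layouts that a single fixed stitching direction cannot produce.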
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Enlightened by the present invention, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.
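For the stitching-angle adjustment recited in claims 10 to 12 below, the following is a minimal sketch under the assumption that the sliding direction selects the rotation direction and the sliding distance scales the rotation angle linearly; the proportionality constant, the sign convention, and all names are hypothetical and not prescribed by the patent.

    import math
    from PIL import Image

    DEGREES_PER_PIXEL = 0.25  # hypothetical sensitivity; the patent fixes no value

    def rotation_from_slide(slide_dx, slide_dy):
        """Derive a signed rotation angle from one sliding sub-input.

        The horizontal sliding direction selects the rotation direction,
        and the sliding distance determines the angle's magnitude.
        """
        distance = math.hypot(slide_dx, slide_dy)
        sign = 1.0 if slide_dx >= 0 else -1.0
        return sign * distance * DEGREES_PER_PIXEL

    def adjust_stitch_angle(target, slide_dx, slide_dy):
        """Rotate the target image to be adjusted by the derived angle."""
        angle = rotation_from_slide(slide_dx, slide_dy)
        # expand=True keeps the whole rotated image inside the new bounds.
        return target.rotate(angle, expand=True, fillcolor="white")

Applying adjust_stitch_angle to one or both target images before pasting them onto the canvas realizes a stitching angle between the two images.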

Claims (17)

1. An image processing method applied to a terminal, comprising:
receiving N inputs, each of the N inputs comprising an image acquisition input and an image stitching input;
in response to the N inputs, generating a stitched image formed by stitching N images corresponding to the N inputs;
wherein stitching parameters of the N images are associated with input parameters of the N inputs, and the stitching parameters comprise at least one of a stitching position and a stitching angle; N is an integer greater than 1.
2. The image processing method of claim 1, wherein the receiving N inputs comprises at least one of:
receiving at least one sliding input of a user on a display screen of the terminal;
or receiving an image input of at least one head movement of the user.
3. The image processing method according to claim 1 or 2, wherein the receiving N inputs comprises:
receiving an ith input;
and the generating, in response to the N inputs, a stitched image formed by stitching the N images corresponding to the N inputs comprises:
acquiring an ith image in response to the ith input;
acquiring an ith input parameter of the ith input;
determining a reference image;
stitching the ith image to a target position of the reference image based on the ith input parameter;
wherein i is an integer greater than 1, and i is less than or equal to N.
4. The image processing method according to claim 3, wherein the determining the reference image comprises:
receiving a first input of a user;
in response to the first input, determining one of the 1st image to the (i-1)th image as the reference image.
5. The image processing method according to claim 4, further comprising, after the receiving the ith input:
generating a thumbnail of the ith image corresponding to the ith input;
and displaying a process of stitching the thumbnail of the ith image to the target position of a thumbnail of the reference image.
6. The image processing method of claim 5, wherein the receiving a first input of a user comprises:
receiving a first input of the user to a target thumbnail;
and the determining, in response to the first input, one of the 1st image to the (i-1)th image as the reference image comprises:
in response to the first input, determining a target image corresponding to the target thumbnail as the reference image;
wherein the target thumbnail is one of the 1st thumbnail to the (i-1)th thumbnail.
7. The image processing method according to claim 4, further comprising, before the receiving the ith input:
displaying a stitching identifier, wherein the stitching identifier indicates at least one candidate stitching position;
wherein the receiving a first input of a user comprises:
receiving a first input of the user to the stitching identifier;
and the determining, in response to the first input, one of the 1st image to the (i-1)th image as the reference image comprises:
in response to the first input, moving the stitching identifier, and determining the image at the position to which the stitching identifier corresponds at the end time of the first input as the reference image.
8. The image processing method according to claim 3, wherein the determining the reference image comprises:
determining an (i-1)th image as the reference image;
or determining a kth image as the reference image, wherein the kth image is the reference image determined in one of the 2nd input to the (i-1)th input, k is an integer greater than 1, and k is less than or equal to i-1.
9. The image processing method according to claim 3, wherein the ith input parameter is an input direction of the ith input;
and the stitching the ith image to the target position of the reference image based on the ith input parameter comprises:
determining the target position based on the input direction of the ith input;
and stitching the ith image to the target position of the reference image.
10. The image processing method according to claim 1, wherein the N inputs comprise a first sliding sub-input and a second sliding sub-input for selecting a first target image and a second target image, and the N images comprise the first target image and the second target image;
and the generating, in response to the N inputs, a stitched image formed by stitching the N images corresponding to the N inputs comprises:
adjusting a stitching angle between the first target image and the second target image in response to the first sliding sub-input and the second sliding sub-input.
11. The image processing method of claim 10, wherein the adjusting the stitching angle between the first target image and the second target image in response to the first sliding sub-input and the second sliding sub-input comprises:
acquiring the sliding direction and the sliding distance of the first sliding sub-input and the second sliding sub-input;
and adjusting the stitching angle between the first target image and the second target image according to the sliding direction and the sliding distance.
12. The image processing method according to claim 11, wherein the adjusting the stitching angle between the first target image and the second target image according to the sliding direction and the sliding distance comprises:
determining a rotation direction of a target image to be adjusted according to the sliding direction;
determining a rotation angle of the target image to be adjusted according to the sliding distance;
rotating the target image to be adjusted by the rotation angle according to the rotation direction;
wherein the target image to be adjusted is at least one of the first target image and the second target image.
13. The image processing method according to claim 3, wherein the acquiring the ith image comprises:
performing a screen capturing operation to generate a screen capture image;
and determining the screen capture image as the ith image.
14. The image processing method according to claim 3, wherein the acquiring the ith image comprises:
controlling a camera to capture an image;
and determining the image captured by the camera as the ith image.
15. A mobile terminal, comprising:
a receiving module, configured to receive N inputs, each of the N inputs comprising an image acquisition input and an image stitching input;
and a response module, configured to generate, in response to the N inputs, a stitched image formed by stitching N images corresponding to the N inputs;
wherein stitching parameters of the N images are associated with input parameters of the N inputs, and the stitching parameters comprise at least one of a stitching position and a stitching angle; N is an integer greater than 1.
16. A mobile terminal, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 14.
17. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 14.
CN201811553560.4A 2018-12-19 2018-12-19 Image processing method and terminal Active CN109559280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811553560.4A 2018-12-19 2018-12-19 Image processing method and terminal


Publications (2)

Publication Number Publication Date
CN109559280A (en) 2019-04-02
CN109559280B (en) 2023-09-08

Family

ID=65870392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811553560.4A Active 2018-12-19 2018-12-19 Image processing method and terminal

Country Status (1)

Country Link
CN (1) CN109559280B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022022726A1 (en) * 2020-07-31 2022-02-03 华为技术有限公司 Image capture method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742341A (en) * 2010-01-14 2010-06-16 中山大学 Method and device for image processing
WO2016015585A1 (en) * 2014-07-31 2016-02-04 维沃移动通信有限公司 Screen capture method for terminal device as well as terminal device, computer program product and computer readable recording medium of screen capture method
CN107659769A (en) * 2017-09-07 2018-02-02 维沃移动通信有限公司 A kind of image pickup method, first terminal and second terminal
CN107705251A (en) * 2017-09-21 2018-02-16 努比亚技术有限公司 Picture joining method, mobile terminal and computer-readable recording medium
CN107872623A (en) * 2017-12-22 2018-04-03 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on image stitching algorithms based on spatial correlation; Xu Chen et al.; Information Systems Engineering; 2010-09-20 (No. 09); full text *


Similar Documents

Publication Title
CN108668083B (en) Photographing method and terminal
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN108495029B (en) Photographing method and mobile terminal
CN109495711B (en) Video call processing method, sending terminal, receiving terminal and electronic equipment
CN109361869B (en) Shooting method and terminal
CN108471498B (en) Shooting preview method and terminal
CN111182205B (en) Photographing method, electronic device, and medium
CN109862267B (en) Shooting method and terminal equipment
CN109859307B (en) Image processing method and terminal equipment
CN109474787B (en) Photographing method, terminal device and storage medium
CN109683777B (en) Image processing method and terminal equipment
CN111177420B (en) Multimedia file display method, electronic equipment and medium
CN108898555B (en) Image processing method and terminal equipment
CN109102555B (en) Image editing method and terminal
CN109413333B (en) Display control method and terminal
CN109684277B (en) Image display method and terminal
CN108174109B (en) Photographing method and mobile terminal
CN108174110B (en) Photographing method and flexible screen terminal
CN110908517B (en) Image editing method, image editing device, electronic equipment and medium
CN107728923B (en) Operation processing method and mobile terminal
CN108132749B (en) Image editing method and mobile terminal
CN110536005B (en) Object display adjustment method and terminal
CN110413363B (en) Screenshot method and terminal equipment
CN110086998B (en) Shooting method and terminal
CN110955793A (en) Display control method and electronic equipment

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant