CN109769089B - Image processing method and terminal equipment - Google Patents

Image processing method and terminal equipment

Info

Publication number
CN109769089B
CN109769089B (application CN201811627279.0A)
Authority: CN (China)
Prior art keywords: sub, input, image, images, screen
Prior art date: 2018-12-28
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN201811627279.0A
Other languages: Chinese (zh)
Other versions: CN109769089A
Inventor: 袁观智
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Priority date: 2018-12-28
Filing date: 2018-12-28
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811627279.0A
Publication of CN109769089A: 2019-05-17
Application granted
Publication of CN109769089B: 2021-03-16

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an image processing method and a terminal device. The method is applied to a terminal device comprising at least two display screens and comprises the following steps: receiving a user's first input on N candidate images displayed on at least one first screen; in response to the first input, performing image synthesis on M sub-images and outputting a target image; each sub-image is a partial image of one of the N candidate images; N is a positive integer, and M is a positive integer greater than 1. The method and the device thus enable multiple images to be combined.

Description

Image processing method and terminal equipment
Technical Field
Embodiments of the present invention relate to the field of terminals, and in particular to an image processing method and a terminal device.
Background
Image processing is the process of selecting an image from a gallery and performing operations on it such as displaying and editing.
Current image processing schemes can only handle a single image. Taking image editing as an example: an image is selected for display, and the edit icon is clicked to enter an edit mode for operations such as cropping, writing, or doodling. Editing multiple images, for example combining several images into one, requires cutting and merging with professional image editing software, which makes the operation difficult.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, aiming to solve the problem that multi-image editing is difficult to operate.
In order to solve the technical problem, the invention is realized as follows:
In a first aspect, an image processing method is provided, the method comprising:
receiving a user's first input on N candidate images displayed on at least one first screen;
in response to the first input, performing image synthesis on M sub-images and outputting a target image;
wherein each sub-image is a partial image of one of the N candidate images; N is a positive integer, and M is a positive integer greater than 1.
In a second aspect, a terminal device is provided, which includes:
a receiving module, configured to receive a user's first input on N candidate images displayed on at least one first screen;
a processing module, configured to perform image synthesis on M sub-images in response to the first input and output a target image;
wherein each sub-image is a partial image of one of the N candidate images; N is a positive integer, and M is a positive integer greater than 1.
In a third aspect, a terminal device is provided, the terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to the first aspect.
In the embodiments of the present invention, N candidate images are displayed on multiple display screens of the terminal device, and M sub-images of the N candidate images are merged into one image based on a user's first input on the N candidate images. Compared with prior-art schemes in which only a single image can be edited, sub-images of multiple images can be synthesized through the first input alone, which is convenient and fast.
Drawings
FIG. 1 is a schematic diagram of an application scenario provided by the present invention;
fig. 2 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the multi-image display step provided by an embodiment of the present invention;
FIG. 4 is a flowchart illustrating an image browsing step according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an image browsing method according to an embodiment of the present invention;
FIGS. 6 and 7 are schematic diagrams of image selection steps provided by an embodiment of the present invention;
FIG. 8 is a schematic view of a cut-and-splice interface provided by an embodiment of the present invention;
FIGS. 9 and 10 are schematic diagrams of the sub-image cutting step provided by an embodiment of the present invention;
fig. 11 is a schematic diagram of the sub-image copying step provided by an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a terminal device according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiments of the present invention provide an image processing method in which N candidate images are displayed on multiple display screens of a terminal device, and M sub-images of the N candidate images are merged into one image based on a user's first input on the N candidate images. Compared with prior-art schemes that edit only a single image, this effectively achieves multi-image editing.
The terminal device may be a PC or a mobile terminal. A mobile terminal, also called a mobile communication terminal, is a computing device that can be used while moving; broadly this includes mobile phones, notebooks, tablet computers, POS machines, and even vehicle-mounted computers, but most commonly refers to mobile phones or smartphones and tablets with multiple application functions.
An application scenario of the present invention is exemplarily illustrated with reference to fig. 1.
In this application scenario, the terminal device includes multiple display screens, on which image A, image B, and image C are respectively displayed.
The user can generate an input on image A, image B, and image C by operating on the display interface; in response to the input, the terminal device selects the images participating in the combination from images A, B, and C and combines them into one image (denoted as image D); alternatively,
after the images participating in the merging are selected, desired partial images are further cut out from them, and these partial images are merged into one image (denoted as image D).
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention, where the method is applied to a terminal device including at least two display screens, and referring to fig. 2, the method may specifically include the following steps:
step 220, receiving a first input of the user to the N candidate images displayed on the at least one first screen.
Here, the at least two display screens may be multiple physically separate (for example, foldable) screens, or multiple split screens formed after split-screen processing; the at least one first screen may be the display screen(s) on which the images participating in the composition are located; the first input may be defined based on the functions the terminal device supports, for example a touch input or a folding input. In particular, the first input may also be a first operation.
Additionally, it should be understood that prior to step 220, the method further comprises a multi-image display step.
For a foldable terminal device, the multi-image display step includes:
displaying N candidate images on the at least two display screens when all display screens of the terminal device are in an unfolded state, so as to realize multi-image display; N is a positive integer, and each display screen displays one candidate image.
For a terminal device supporting split screens, the multi-image display step includes:
when a preset operation on an interface displaying a single image is detected, performing split-screen processing and displaying N candidate images on the resulting split screens to realize multi-image display. Referring to fig. 3, a specific example may be:
Step 320, receiving a multi-image display input on an interface displaying a single image.
The multi-image display input may be a first touch input on the touch screen of the terminal device, for example sliding three fingers inward from any screen edge; it may also be a first folding input on the folding screen of the terminal device, for example folding the screen in half.
Step 340, generating a plurality of split screens and determining a plurality of images associated with the single image based on the multi-image display input.
One implementation may be:
first, split-screen processing is performed on the terminal device to obtain a first number of split screens; then, a second number of images is selected from the image list in which the single image is located.
The selected images may include the single image and the images before and/or after it in the list.
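By way of illustration only, this selection step could be sketched as follows (in Python; the function `pick_candidates` and its arguments are illustrative assumptions, not part of the disclosed embodiment):

```python
def pick_candidates(gallery, current_index, second_number):
    """Pick `second_number` consecutive images around the currently
    displayed image: the image itself plus neighbours before and/or
    after it, clamped to the bounds of the gallery list."""
    start = max(0, min(current_index, len(gallery) - second_number))
    return gallery[start:start + second_number]

# gallery = ["A", "B", "C", "D", "E"]; single image "A" at index 0:
# pick_candidates(gallery, 0, 3) -> ["A", "B", "C"]
```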
Step 360, displaying the multiple images associated with the single image on the multiple split screens.
If the first number and the second number are the same, one implementation of step 360 may be:
displaying the multiple images associated with the single image on the multiple split screens in a one-to-one manner. Referring to fig. 1, assuming there are 3 split screens, the single image is image A, and the determined images are image A together with the subsequent images B and C, then image A, image B, and image C are respectively displayed on the 3 split screens.
If the first number is different from, and greater than, the second number, another implementation of step 360 may be:
displaying the images associated with the single image on the split screens in a one-to-one manner, and displaying an image adding option on each redundant split screen, so that the user can pick an image through the image adding option to display on that split screen.
The redundant split screens are those not yet displaying an image; the image adding option may be a 'plus' visual control.
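A sketch of this assignment (again in Python; `assign_to_split_screens` and the '+' placeholder are illustrative assumptions):

```python
def assign_to_split_screens(images, first_number):
    """Assign the determined images to the split screens one-to-one;
    each redundant split screen shows an image adding option."""
    ADD_OPTION = "+"  # stand-in for the 'plus' visual control
    return [images[i] if i < len(images) else ADD_OPTION
            for i in range(first_number)]

# Three images on five split screens:
# assign_to_split_screens(["A", "B", "C"], 5) -> ["A", "B", "C", "+", "+"]
```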
In this way, multiple candidate images are displayed across multiple display screens, achieving multi-image display and providing a basis for subsequent multi-image editing.
Step 240, in response to the first input, performing image synthesis on M sub-images and outputting a target image.
Each sub-image is a partial image of one of the N candidate images, and M is a positive integer greater than 1.
In a first implementation of step 240, the first input includes a first sub-input for selecting sub-images, whereby the M sub-images selected by the first sub-input are displayed on a second screen in response to the first sub-input. With reference to fig. 1, this implementation may specifically be:
assuming the user selects image A and image B (as the sub-images) through the first sub-input, image A and image B are combined into one image and displayed on the second screen; alternatively,
assuming the user makes the first sub-input by sliding one finger from left to right across the entire screen, image A, image B, and image C are combined into one image and displayed on the second screen.
The first sub-input may be a sliding input on the touch screen, a click input of a preset form, a long-press input, or an input on a physical key of the terminal device; the second screen may be any one of the at least two display screens, or a screen formed by at least two of them.
Thus, complete images are selected from the multiple images through the first sub-input and displayed on the second screen, realizing the merging of multiple complete images.
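The synthesis of complete images in this first implementation can be sketched with the Pillow library as below; the side-by-side layout is an assumption, since the embodiment does not prescribe how the selected images are arranged on the second screen:

```python
from PIL import Image

def merge_side_by_side(images):
    """Composite the selected complete images into one target image
    by pasting them left to right on a shared canvas."""
    width = sum(im.width for im in images)
    height = max(im.height for im in images)
    target = Image.new("RGB", (width, height), "white")
    x = 0
    for im in images:
        target.paste(im, (x, 0))
        x += im.width
    return target

# target = merge_side_by_side([Image.open("a.jpg"), Image.open("b.jpg")])
```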
In a second implementation of step 240, the first input includes a second sub-input for updating the candidate images. After the multiple images are displayed, the candidate images shown on the at least two display screens can be updated in response to the second sub-input, so that images can be browsed across the multiple screens, improving browsing efficiency and supporting the merging of multiple images.
The second sub-input may be a sliding input on a screen of the foldable terminal; or it may be a bending input on the foldable terminal, the bending input controlling a target screen area of the foldable terminal to bend. In the latter case, referring to fig. 4, the image browsing step may specifically be as follows:
and step 420, determining a target screen area and a bending direction corresponding to the bending input.
Wherein, the target screen area refers to an area of a folding screen for bending input (operation); the bending direction refers to a direction relative to a plane where the folding screen is located caused by a bending input, with the plane as a reference, for example: an upward fold angle, a downward fold angle.
Step 440, determining an image update order and an image update number based on the target screen area and the bending direction.
Different bent screen areas correspond to different image update numbers, and different bending directions correspond to different image update orders. For example, when the upper right corner of the folding screen is folded inward, all images displayed on the multiple screens are shifted one position to the left.
Step 460, updating the display of the candidate images on the at least two display screens based on the image update order and the image update number.
Specifically, steps 420 to 460 may be exemplified as follows:
Example 1: based on the sliding distance of a left (or right) swipe on the screen, a corresponding number of the 3 displayed images are shifted left (or right); the longer the sliding distance, the more images are shifted.
Example 2: folding the upper right corner of the screen inward once (see fig. 5) shifts the 3 images left as a whole by one position (replacing 1 image); folding the upper left corner inward once shifts the 3 images right as a whole by one position (replacing 1 image).
The folding positions and their functions can be configured flexibly based on the user's operating habits; for example, folding the lower left corner could also be configured to shift the images left by one, and so on.
Example 3: folding the upper right corner of the screen outward once turns the 3 images backward as a whole (replacing all 3 images); folding the upper left corner outward once turns the 3 images forward as a whole (replacing all 3 images).
Furthermore, during browsing the user can select the image on a particular display screen so that it is unaffected by image updates, and continue browsing by updating the images on the other display screens. Accordingly, step 460 may specifically be implemented as:
determining which of the displayed images are in the unselected state, and, based on the image update order and the image update number, shifting the unselected images left by one position, left by all positions, right by one position, right by all positions, and so on, so that the unselected images displayed on the at least two display screens are updated while images in the selected state are not. This increases the operability and enjoyment of multi-image browsing.
The image selection step may be implemented as follows:
the first input further comprises a third sub-input for triggering display of an image selection box and a fourth sub-input for adjusting the display position of the image selection box. After the multi-image display, the image selection box may be displayed in response to the third sub-input; and in response to the fourth sub-input, each image framed by the image selection box is determined as a sub-image. After the user has selected images this way, the first implementation of step 240 may be used to merge the selected sub-images. The third sub-input may be a preset touch operation; the image selection box may be displayed at the position where the touch occurs, or at any position on the screen, and is used to put one of the candidate images into the selected state.
Referring to fig. 6 and 7, the image selection step may specifically be as follows:
the user slides an image selection box 61 out from the leftmost side of the screen with two fingers and places it on screen I; a halo appears around screen I, indicating that the image on that screen is selected (see fig. 7). Preferably, at most (the number of split screens minus 1) image selection boxes exist by default. Dragging with two fingers from screen I can move the selection box to another split screen to select a different image, at which point the image on the original split screen is deselected. Continuing to drag the selection box with two fingers to any screen edge makes it disappear.
Further, while an image is in the selected state, it is not changed by the image browsing operations described above; that is, its normal display is unaffected by shifting or page-turning operations on the other split screens.
Thus, the image selection box is displayed through the third sub-input, and the required complete images are selected from the multiple images by operating the box through the fourth sub-input; these complete images then participate in merging as sub-images, improving multi-image selection efficiency and supporting multi-image merging.
In a third implementation of step 240, the first input includes a fifth sub-input on an nth candidate image, where the nth candidate image is any one of the N candidate images. Referring to fig. 8, before image-synthesizing the M sub-images, the method further includes:
Step S1, a 'cut-and-splice mode' is opened by folding the lower right corner 81 of the folding screen, entering a 'cut-and-splice interface'.
In the cut-and-splice interface, screens I and III by default display two selected images, image A and image B, as cutting material; screen II displays the final spliced image, denoted image X; the circular icon 82 in the right margin controls the cutting operations.
The user can switch images by sliding on screen I or III; in that case the image on that screen changes while the image on screen II remains unchanged. When the image to be stitched is confirmed, that image is selected.
Step S2, in response to the fifth sub-input, dividing the nth candidate image into X × Y sub-regions.
Step S3, receiving a fifth input of the user on K of the X × Y sub-regions.
Step S4, determining the images of the K sub-regions as the sub-images of the nth candidate image.
Assuming image A is the nth candidate image, and referring to fig. 8 to 10, steps S2 to S4 may be exemplified as follows:
the user taps to select image A, then taps the circular control 82 on the right to divide image A into X × Y small squares (i.e., sub-regions). On this basis the user can roughly select the required material area, and the cut-out area is saved by folding the lower right corner 81 of the folding screen; screen I then displays the cut-out area full screen. The preceding operations can be repeated several times, further dividing and selecting within the cut-out area, until the required material (denoted sub-image a) is obtained.
Similarly, the required sub-images can be cut out of the other candidate images in the same way.
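The division and cut-out of steps S2 to S4 can be sketched with Pillow as below; merging the K picked cells into their bounding box is an assumption, since the embodiment only says the user roughly selects the required material area:

```python
from PIL import Image

def cut_subregions(image, x_cells, y_cells, picked):
    """Divide a candidate image into x_cells * y_cells sub-regions and
    return the K picked cells as one cut-out area (their bounding box)."""
    cw, ch = image.width // x_cells, image.height // y_cells
    boxes = [(col * cw, row * ch, (col + 1) * cw, (row + 1) * ch)
             for col, row in picked]
    left = min(b[0] for b in boxes)
    upper = min(b[1] for b in boxes)
    right = max(b[2] for b in boxes)
    lower = max(b[3] for b in boxes)
    return image.crop((left, upper, right, lower))

# sub_image_a = cut_subregions(Image.open("a.jpg"), 4, 4,
#                              picked=[(1, 1), (2, 1), (1, 2), (2, 2)])
```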
Further, the first input also comprises a sixth sub-input for triggering sub-image copying and a seventh sub-input for triggering image synthesis. For the sub-images selected in steps S1 to S3, the images of the K sub-regions may be copied to a target screen for display in response to the sixth sub-input; and in response to the seventh sub-input, the images of the T sub-regions in the target screen are synthesized and a target image is output.
The target screen may be any one of the at least two screens, or the display screen holding an image that does not participate in the merging, preferably the latter, so that a sub-image is shown on the target screen while its complete image remains on its original display screen.
In connection with the examples of fig. 8 to 10, the image copying and synthesizing step may be exemplified as follows:
referring to fig. 11, folding the screen on which sub-image a is located copies sub-image a to the middle screen II for display. Other sub-images can be copied to screen II in the same way; by default, different sub-images sit on different layers, so their relative positions can be adjusted to obtain the image the user wants. The 'cut-and-splice mode' can then be exited by releasing the folded portion at the lower right corner 81 of the folding screen.
Thus, on the basis of the first or second implementation, the required partial images are further cut out of the complete images and merged as sub-images, improving the precision of multi-image merging. Copying each sub-image to the same screen for position adjustment before merging improves that precision further. In addition, multi-image merging can be accomplished through folding operations and/or touch operations on the folding screen, which is simple, convenient, and engaging.
Based on the above, in this embodiment N candidate images are displayed on multiple display screens of the terminal device, and M sub-images of the N candidate images are merged into one image based on a user's first input on the N candidate images. Compared with prior-art schemes in which only a single image can be edited, multi-image merging is achieved effectively.
In addition, for simplicity of description, the above method embodiments are described as a series of actions; those skilled in the art will understand, however, that the present invention is not limited by the order of the actions described, as some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily all required by the invention.
Fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present invention, and referring to fig. 12, the terminal device 120 may specifically include: a receiving module 121 and a processing module 122, wherein:
a receiving module 121, configured to receive a user's first input on N candidate images displayed on at least one first screen;
a processing module 122, configured to perform image synthesis on M sub-images in response to the first input and output a target image;
wherein each sub-image is a partial image of one of the N candidate images; N is a positive integer, and M is a positive integer greater than 1.
Optionally, the first input includes a first sub-input for selecting sub-images, and the processing module 122 is specifically configured to:
display, in response to the first sub-input, the M sub-images selected by the first sub-input on a second screen.
Optionally, the terminal device is a foldable terminal, and the device further includes:
a multi-image display module, configured to display N candidate images on the at least two display screens when all display screens of the terminal device are in an unfolded state, one candidate image being displayed on each display screen.
Optionally, the first input includes a second sub-input for updating the candidate images, and the processing module 122 is specifically configured to:
update the display of the candidate images on the at least two display screens in response to the second sub-input.
Optionally, the second sub-input is a sliding input on a screen of the foldable terminal; or the second sub-input is a bending input on the foldable terminal, the bending input being used to control a target screen area of the foldable terminal to bend.
Optionally, if the second sub-input is a bending input on the foldable terminal, the processing module 122 is specifically configured to:
determine a target screen area and a bending direction corresponding to the bending input; determine an image update order and an image update number based on the target screen area and the bending direction; and update the display of the candidate images on the at least two display screens based on the image update order and the image update number.
Optionally, the first input further includes a third sub-input for triggering display of the image selection box, and the processing module 122 is specifically configured to: display an image selection box in response to the third sub-input.
Optionally, the first input further includes a fourth sub-input for adjusting the display position of the image selection box, and the processing module 122 is specifically configured to:
determine, in response to the fourth sub-input, each image framed by the image selection box as a sub-image.
Optionally, the first input includes a fifth sub-input on an nth candidate image, the nth candidate image being any one of the N candidate images, and the processing module 122 is specifically configured to:
divide, in response to the fifth sub-input, the nth candidate image into X × Y sub-regions; receive a fifth input of the user on K of the X × Y sub-regions; and determine the images of the K sub-regions as sub-images of the nth candidate image.
Optionally, the first input further includes a sixth sub-input for triggering sub-image copying and a seventh sub-input for triggering image synthesis, and the processing module 122 is specifically configured to:
copy, in response to the sixth sub-input, the images of the K sub-regions to a target screen for display; and synthesize, in response to the seventh sub-input, the images of the T sub-regions in the target screen and output a target image.
As can be seen, in this embodiment N candidate images are displayed on multiple display screens of the terminal device, and M sub-images of the N images are merged into one image based on a user's first input on the N candidate images. Compared with prior-art schemes in which only a single image can be edited, sub-images of multiple images can be synthesized through the first input alone, which is convenient and fast.
As the device embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment. It should also be noted that the components of the device are divided logically according to the functions to be realized; the invention is not limited to this division, and components may be re-divided or combined as needed.
Fig. 13 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, and referring to fig. 13, the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 13 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 110 is configured to receive a user's first input on N candidate images displayed on at least one first screen;
and, in response to the first input, perform image synthesis on M sub-images and output a target image;
wherein each sub-image is a partial image of one of the N candidate images; N is a positive integer, and M is a positive integer greater than 1.
In this embodiment, when the terminal device displays an interface with a single image, the multi-image display operation performed on that interface is monitored; multiple split screens are generated based on the multi-image display operation, multiple images associated with the single image are determined, and these images are displayed and edited on the split screens. Then, based on a user's first input on the N candidate images shown across the display screens, M sub-images of the N images are merged into one image. Compared with prior-art schemes that edit only a single image, this effectively achieves multi-image editing.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data; in a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 13, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
An embodiment of the present invention further provides a terminal device, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the above image processing method embodiment and achieves the same technical effect; to avoid repetition, the details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the image processing method embodiment and achieves the same technical effect; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. An image processing method, applied to a terminal device comprising at least two display screens, characterized by comprising:
receiving a user's first input on N candidate images displayed on at least one first screen;
in response to the first input, performing image synthesis on M sub-images and outputting a target image;
wherein each sub-image is a partial image of one of the N candidate images; N is a positive integer, and M is a positive integer greater than 1;
wherein the first input comprises a first sub-input for selecting sub-images;
after the receiving of the user's first input on the N candidate images displayed on the at least one first screen, the method further comprises:
displaying, in response to the first sub-input, the M sub-images selected by the first sub-input on a second screen;
wherein the terminal device is a foldable terminal;
before the receiving of the user's first input on the N candidate images displayed on the at least one first screen, the method further comprises:
displaying N candidate images on the at least two display screens when all display screens of the terminal device are in an unfolded state;
wherein the first input further comprises a third sub-input for triggering display of an image selection box;
after the N candidate images are displayed on the at least two display screens, the method further comprises:
displaying an image selection box in response to the third sub-input;
wherein the first input comprises a fifth sub-input on an nth candidate image, the nth candidate image being any one of the N candidate images;
before image synthesis is performed on the M sub-images, the method further comprises:
dividing, in response to the fifth sub-input, the nth candidate image into X × Y sub-regions;
receiving a fifth input of the user on K of the X × Y sub-regions;
and determining the images of the K sub-regions as sub-images of the nth candidate image.
2. The method of claim 1,
wherein one candidate image is displayed on each display screen respectively.
3. The method of claim 2, wherein the first input comprises a second sub-input for updating the candidate images;
after the displaying of the N candidate images on the at least two display screens, the method further comprises:
updating the display of the candidate images on the at least two display screens in response to the second sub-input.
4. The method according to claim 3, wherein the second sub-input is a sliding input on a screen of the foldable terminal;
or the second sub-input is a bending input on the foldable terminal, the bending input being used to control a target screen area of the foldable terminal to bend.
5. The method of claim 4, wherein the second sub-input is a bending input on the foldable terminal;
the updating of the display of the candidate images on the at least two display screens in response to the second sub-input comprises:
determining a target screen area and a bending direction corresponding to the bending input;
determining an image update order and an image update number based on the target screen area and the bending direction;
updating the display of the candidate images on the at least two display screens based on the image update order and the image update number.
6. The method of claim 1, wherein the first input further comprises a fourth sub-input for adjusting a display position of the image selection box;
after the image selection box is displayed, the method further comprises:
determining, in response to the fourth sub-input, each image framed by the image selection box as a sub-image.
7. The method of claim 1, wherein the first input further comprises a sixth sub-input for triggering sub-image copying and a seventh sub-input for triggering image synthesis;
after the determining of the images of the K sub-regions as sub-images, the method further comprises:
copying, in response to the sixth sub-input, the images of the K sub-regions to a target screen for display;
the performing of image synthesis on the M sub-images in response to the first input and outputting of a target image comprises:
synthesizing, in response to the seventh sub-input, the images of the T sub-regions in the target screen, and outputting a target image.
8. A terminal device, comprising:
a receiving module, configured to receive a user's first input on N candidate images displayed on at least one first screen;
a processing module, configured to perform image synthesis on M sub-images in response to the first input and output a target image;
wherein each sub-image is a partial image of one of the N candidate images; N is a positive integer, and M is a positive integer greater than 1;
wherein the first input comprises a first sub-input for selecting sub-images, and the processing module is configured to:
display, in response to the first sub-input, the M sub-images selected by the first sub-input on a second screen;
wherein the terminal device is a foldable terminal and further comprises:
a multi-image display module, configured to display N candidate images on the at least two display screens when all display screens of the terminal device are in an unfolded state;
wherein the first input further comprises a third sub-input for triggering display of the image selection box, and the processing module is specifically configured to: display an image selection box in response to the third sub-input;
wherein the first input comprises a fifth sub-input on an nth candidate image, the nth candidate image being any one of the N candidate images, and the processing module is specifically configured to:
divide, in response to the fifth sub-input, the nth candidate image into X × Y sub-regions; receive a fifth input of the user on K of the X × Y sub-regions; and determine the images of the K sub-regions as sub-images of the nth candidate image.
9. A terminal device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 7.
CN201811627279.0A 2018-12-28 2018-12-28 Image processing method and terminal equipment Active CN109769089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811627279.0A CN109769089B (en) 2018-12-28 2018-12-28 Image processing method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811627279.0A CN109769089B (en) 2018-12-28 2018-12-28 Image processing method and terminal equipment

Publications (2)

Publication Number Publication Date
CN109769089A CN109769089A (en) 2019-05-17
CN109769089B true CN109769089B (en) 2021-03-16

Family

ID=66452193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811627279.0A Active CN109769089B (en) 2018-12-28 2018-12-28 Image processing method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109769089B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110933300B (en) * 2019-11-18 2021-06-22 深圳传音控股股份有限公司 Image processing method and electronic terminal equipment
CN111443855B (en) * 2020-04-07 2021-07-16 维沃移动通信有限公司 Image processing method and electronic equipment
CN112965681B (en) * 2021-03-30 2022-12-23 维沃移动通信有限公司 Image processing method, device, equipment and storage medium
CN114339029B (en) * 2021-11-23 2024-04-23 维沃移动通信有限公司 Shooting method and device and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006020105A (en) * 2004-07-02 2006-01-19 Casio Comput Co Ltd Imaging device, slot picture photographing method and program
CN104333699A (en) * 2014-11-25 2015-02-04 广州视源电子科技股份有限公司 Synthetic method and device of user-defined photographing area
CN105117113A (en) * 2015-07-18 2015-12-02 西安电子科技大学 Electronic display apparatus
CN107589903A (en) * 2017-10-19 2018-01-16 广东欧珀移动通信有限公司 The method and apparatus for showing more page number displaying information
CN108093171A (en) * 2017-11-30 2018-05-29 努比亚技术有限公司 A kind of photographic method, terminal and computer readable storage medium
CN108469898A (en) * 2018-03-15 2018-08-31 维沃移动通信有限公司 A kind of image processing method and flexible screen terminal
CN108881742A (en) * 2018-06-28 2018-11-23 维沃移动通信有限公司 A kind of video generation method and terminal device
CN108898555A (en) * 2018-07-27 2018-11-27 维沃移动通信有限公司 A kind of image processing method and terminal device
CN108965710A (en) * 2018-07-26 2018-12-07 努比亚技术有限公司 Method, photo taking, device and computer readable storage medium
CN109062483A (en) * 2018-07-27 2018-12-21 维沃移动通信有限公司 A kind of image processing method and terminal device

Also Published As

Publication number Publication date
CN109769089A (en) 2019-05-17

Similar Documents

Publication Title
JP7359920B2 (en) Image processing method and flexible screen terminal
CN109769089B (en) Image processing method and terminal equipment
CN108495029B (en) Photographing method and mobile terminal
WO2020220991A1 (en) Screen capture method, terminal device and computer-readable storage medium
CN109725683B (en) Program display control method and folding screen terminal
CN109739407B (en) Information processing method and terminal equipment
WO2021082716A1 (en) Information processing method and electronic device
CN111638837B (en) Message processing method and electronic equipment
CN111104029B (en) Shortcut identifier generation method, electronic device and medium
CN109683802B (en) Icon moving method and terminal
CN108898555B (en) Image processing method and terminal equipment
CN108646960B (en) File processing method and flexible screen terminal
CN111064848B (en) Picture display method and electronic equipment
EP3731506A1 (en) Image display method and mobile terminal
CN108108079B (en) Icon display processing method and mobile terminal
CN110908554B (en) Long screenshot method and terminal device
WO2020215969A1 (en) Content input method and terminal device
CN108804628B (en) Picture display method and terminal
CN110968229A (en) Wallpaper setting method and electronic equipment
US20220351330A1 (en) Image cropping method and electronic device
CN110737375A (en) display methods and terminals
CN111176526B (en) Picture display method and electronic equipment
CN111596990A (en) Picture display method and device
WO2020238496A1 (en) Icon management method and terminal device
CN110928619B (en) Wallpaper setting method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant