CN109842722B - Image processing method and terminal equipment


Info

Publication number
CN109842722B
CN109842722B
Authority
CN
China
Prior art keywords
image
screen
input
terminal device
target
Prior art date
Legal status
Active
Application number
CN201811595054.1A
Other languages
Chinese (zh)
Other versions
CN109842722A (en)
Inventor
马成
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811595054.1A
Publication of CN109842722A
Application granted
Publication of CN109842722B
Legal status: Active

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the invention discloses an image processing method and a terminal device, relates to the technical field of communications, and can solve the problem that, when a user draws certain symmetrical images through a terminal device, the user's operations are complicated and time-consuming. The specific scheme is as follows: receiving a first input of a user in a case where a first image is displayed on a first screen; in response to the first input, mirroring the first image displayed on the first screen onto a second screen, generating a second image and displaying it on the second screen; receiving a second input of the user; and outputting, in response to the second input, a target image based on the first image and the second image, the target image including a third image and a fourth image, where the third image is partial image content or entire image content of the first image, and the fourth image is partial image content or entire image content of the second image. The embodiment of the invention is applied to the process of outputting the target image based on the first image and the second image.

Description

Image processing method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method and terminal equipment.
Background
With the development of terminal technology, the functions of terminal equipment are more and more diversified. For example, the user may draw a picture (e.g., drawing a paper cut, drawing a portrait of a person, etc.) through the terminal device.
Generally, when a user uses some applications of a terminal device to draw, for some symmetrical images, the user needs to draw the content twice in the terminal device and then trigger the terminal device to merge the two drawn images to obtain a final image. Thus, the operation of the user is cumbersome and time-consuming.
Disclosure of Invention
The embodiment of the invention provides an image processing method and a terminal device, which can solve the problem that, when a user draws certain symmetrical images through a terminal device, the user's operations are complicated and time-consuming.
In order to solve the technical problem, the embodiment of the invention adopts the following technical scheme:
in a first aspect of the embodiments of the present invention, an image processing method is provided, where the image processing method is applied to a terminal device, where the terminal device may include a first screen and a second screen, and the image processing method may include: receiving a first input of a user in a case where a first image is displayed on a first screen; in response to a first input, mirroring a first image displayed on a first screen onto a second screen, generating and displaying a second image on the second screen; receiving a second input of the user; outputting a target image based on the first image and the second image in response to a second input, the target image including a third image and a fourth image; the third image is a partial image content of the first image or an entire image content of the first image, and the fourth image is a partial image content of the second image or an entire image content of the second image.
In a second aspect of the embodiments of the present invention, a terminal device is provided, where the terminal device may include a first screen and a second screen, and the terminal device may include: the device comprises a receiving unit, a processing unit and an output unit. The receiving unit is used for receiving a first input of a user under the condition that a first image is displayed on a first screen. And the processing unit is used for responding to the first input received by the receiving unit, mirroring the first image displayed on the first screen to the second screen, generating a second image and displaying the second image on the second screen. And the receiving unit is also used for receiving a second input of the user. An output unit configured to output a target image based on the first image and the second image in response to the second input received by the receiving unit, the target image including a third image and a fourth image; the third image is a partial image content of the first image or an entire image content of the first image, and the fourth image is a partial image content of the second image or an entire image content of the second image.
In a third aspect of the embodiments of the present invention, a terminal device is provided, where the terminal device includes a processor, a memory, and a computer program stored in the memory and being executable on the processor, and the computer program, when executed by the processor, implements the steps of the image processing method according to the first aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiment of the present invention, in a case where the first image is displayed on the first screen, the terminal device may mirror the first image displayed on the first screen onto the second screen according to a first input of the user, so as to display the second image on the second screen, and may then display a target image on the target screen according to a second input of the user. The target image includes a third image and a fourth image, where the third image is partial image content or entire image content of the first image, and the fourth image is partial image content or entire image content of the second image. Because the second image is obtained by the terminal device mirroring the first image displayed on the first screen, rather than by the user drawing it again on the terminal device, the operation of the user can be simplified, and the time consumed by the operation of the user can be saved.
Drawings
Fig. 1 is a schematic structural diagram of an android operating system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an example of an interface of a mobile phone according to an embodiment of the present invention;
fig. 4 is a second schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 5 is a second schematic diagram of an example of an interface of a mobile phone according to an embodiment of the present invention;
fig. 6 is a third schematic diagram of an example of an interface of a mobile phone according to the embodiment of the present invention;
fig. 7 is a fourth schematic view of an example of an interface of a mobile phone according to an embodiment of the present invention;
fig. 8 is a fifth schematic view of an example of an interface of a mobile phone according to an embodiment of the present invention;
fig. 9 is a sixth schematic view of an example of an interface of a mobile phone according to an embodiment of the present invention;
fig. 10 is a third schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 11 is a seventh schematic diagram of an example of an interface of a mobile phone according to an embodiment of the present invention;
fig. 12 is an eighth schematic diagram of an example of an interface of a mobile phone according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 14 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 15 is a third schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 16 is a fourth schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 17 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of embodiments of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input and the second input, etc. are for distinguishing different inputs, rather than for describing a particular order of inputs.
In the description of the embodiments of the present invention, "a plurality of" means two or more unless otherwise specified. For example, a plurality of elements refers to two or more elements.
The term "and/or" herein is an association relationship describing an associated object, and means that there may be three relationships, for example, a display panel and/or a backlight, which may mean: there are three cases of a display panel alone, a display panel and a backlight at the same time, and a backlight alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, input/output denotes input or output.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, an illustration, or a description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present related concepts in a concrete fashion.
The embodiments of the present invention provide an image processing method and a terminal device. In a case where a first image is displayed on a first screen, the terminal device may mirror the first image displayed on the first screen onto a second screen according to a first input of a user, so as to display a second image on the second screen, and may then display a target image on a target screen according to a second input of the user. The target image includes a third image and a fourth image, where the third image is partial image content or entire image content of the first image, and the fourth image is partial image content or entire image content of the second image. Because the second image is obtained by the terminal device mirroring the first image displayed on the first screen, rather than by the user drawing it again on the terminal device, the operation of the user can be simplified, and the time consumed by the operation of the user can be saved.
The image processing method and the terminal device provided by the embodiment of the invention can be applied to the process of outputting the target image based on the first image and the second image. Specifically, the method can be applied to a process of outputting the target image based on the first image and the second image after the second image is obtained in a mirror image manner.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The following describes a software environment to which the image processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is the framework of applications, and a developer can develop applications based on the application framework layer while complying with the development principles of that framework.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the image processing method provided by the embodiment of the invention by running the software program in the android operating system.
An image processing method and a terminal device provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
It should be noted that, in the embodiment of the present invention, the terminal device may include a plurality of screens. In the following embodiments, a terminal device including two screens (for example, a first screen and a second screen) is taken as an example to describe the image processing method provided in the embodiment of the present invention; the method may likewise be executed in a case where the terminal device includes more than two screens.
In the prior art, when a user uses some applications of a terminal device to draw, for some symmetrical images, the user needs to draw the content twice in the terminal device and then trigger the terminal device to merge the two drawn images to obtain a final image. Thus, the operation of the user is cumbersome and time-consuming.
In order to solve the above technical problem, an embodiment of the present invention provides an image processing method. Fig. 2 shows a flowchart of the image processing method, which can be applied to a terminal device having an android operating system as shown in fig. 1. As shown in fig. 2, the image processing method provided by the embodiment of the present invention may include steps 201 to 204 described below.
Step 201, in the case that a first image is displayed on a first screen, a terminal device receives a first input of a user.
In the embodiment of the invention, the terminal equipment can comprise a first screen and a second screen.
In the embodiment of the present invention, the user may trigger the terminal device to start the intelligent jigsaw drawing function in the terminal device, and perform an input (for example, the third input described below) on the first screen with respect to the first image, so as to trigger the terminal device to display the first image on the first screen.
Optionally, in the embodiment of the present invention, a user may draw a first image on a first screen to trigger the terminal device to display the first image on the first screen; or, the user may select the first image from the terminal device to trigger the terminal device to display the first image on the first screen.
Optionally, in the embodiment of the present invention, a user may trigger the terminal device to display the first image in the interface of the target application program (for example, draw the first image in the interface of the drawing-type application program) when the first screen of the terminal device displays the interface of the target application program.
Optionally, in the embodiment of the present invention, after the user triggers the terminal device to start the intelligent jigsaw drawing function, the terminal device may display a prompt message on the first screen or the second screen to prompt the user to perform the first input to the terminal device.
In the embodiment of the present invention, the first input may be used to trigger the terminal device to display an image corresponding to the first image on the second screen.
Optionally, in the embodiment of the present invention, the first input may be a folding input to the terminal device.
It should be noted that the first input may be a folding input of the second screen by the user, so that the second screen rotates around an axis between the first screen and the second screen, and an included angle between the first screen and the second screen is within a first angle range. It can be understood that the angle between the first screen and the second screen of the terminal device changes with the folding input of the user.
Illustratively, taking a terminal device as a mobile phone as an example, as shown in fig. 3, the mobile phone includes a first screen 10 and a second screen 11, a first image 12 is displayed on the first screen 10, and a user can perform a first input on the second screen 11 to rotate the second screen 11 around an axis 13 between the first screen 10 and the second screen 11, so that an included angle between the first screen 10 and the second screen 11 is within a first angle range (the included angle between the first screen 10 and the second screen 11 is illustrated as 30 ° in fig. 3).
Step 202, the terminal device responds to the first input, mirrors the first image displayed on the first screen to the second screen, generates a second image and displays the second image on the second screen.
It will be appreciated that the second image is a mirror image of the first image.
It should be noted that a mirror image can be understood as follows: the arrangement of the parts of the mirror image is exactly opposite to that of the original image (i.e. the mirrored image); in other words, the mirror image is the reflection of the original image with respect to an axis or plane intersecting it.
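In coordinates, this mirroring can be written as a reflection. As a minimal formulation (an illustrative assumption, not notation from the patent), taking the fold axis between the two screens as the vertical line x = c in a coordinate system shared by both screens:

(x, y) \mapsto (x', y') = (2c - x, y)

Expressed in the second screen's own coordinate system, such a reflection can appear as the constant offsets a and b used in the fig. 5 and fig. 6 examples below.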
Alternatively, in the embodiment of the present invention, the step 202 may be specifically implemented by the following step 202a.
Step 202a, the terminal device responds to the first input, and under the condition that an included angle between the first screen and the second screen is within a first angle range, a first image displayed on the first screen is mirrored onto the second screen, a second image is generated, and the second image is displayed on the second screen.
It can be understood that after receiving the first input of the user, the terminal device may obtain a current included angle between the first screen and the second screen, and then determine whether the included angle is within the first angle range.
Optionally, in this embodiment of the present invention, the first angle range may be default for a system of the terminal device or predefined by a user.
Optionally, in an embodiment of the present invention, the first angle range may be the open interval (0°, 180°).
In the embodiment of the invention, when the included angle between the first screen and the second screen is within the first angle range, the terminal device can display the second image corresponding to the first image on the second screen in a mirror image mode without drawing the second image again by a user, so that the drawing operation of the user can be simplified.
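As an illustration of step 202a, the following Kotlin sketch gates the mirroring on the fold angle. The angle range, the function name, and the use of a horizontal flip to generate the second image are assumptions for illustration, built on standard android.graphics APIs rather than anything the patent specifies.

import android.graphics.Bitmap
import android.graphics.Matrix

// Sketch only: mirror the first-screen image when the hinge angle lies inside
// the assumed first angle range, the open interval (0°, 180°).
fun mirrorIfFolded(firstImage: Bitmap, hingeAngleDeg: Float): Bitmap? {
    if (hingeAngleDeg <= 0f || hingeAngleDeg >= 180f) return null // outside the range: do nothing
    // A horizontal flip about the fold axis yields the second image of step 202.
    val flip = Matrix().apply { preScale(-1f, 1f) }
    return Bitmap.createBitmap(firstImage, 0, 0, firstImage.width, firstImage.height, flip, false)
}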
Optionally, in the embodiment of the present invention, as shown in fig. 4 in combination with fig. 2, before the step 202, the image processing method provided in the embodiment of the present invention may further include the following step 301, and the step 202 may be specifically implemented by the following step 202b.
Step 301, the terminal device obtains position information of each pixel point in the first image.
Optionally, in the embodiment of the present invention, the terminal device may obtain, by combining with the first coordinate system in the terminal device, the position information of each pixel point in the first image (that is, the coordinate of each pixel point in the first coordinate system).
Optionally, in this embodiment of the present invention, the first coordinate system may be a system default of the terminal device or predefined by a user.
Optionally, in the embodiment of the present invention, before the step 201, the image processing method provided in the embodiment of the present invention may further include the following step 205 and step 206.
And step 205, the terminal device receives a third input of the user.
In the embodiment of the present invention, the third input is a selection input of the user on the first image in the terminal device, or an input of the user drawing the first image on the first screen.
Optionally, in this embodiment of the present invention, the user may select the first image from a plurality of images saved in the terminal device, or the user may draw the first image in a target application on the first screen (for example, draw the first image in a drawing-type application).
And step 206, the terminal device responds to the third input, displays the first image on the first screen, and records the position information of each pixel point in the first image.
Optionally, in the embodiment of the present invention, after the user selects and inputs the first image in the terminal device, the terminal device may obtain the location information of each pixel point in the first image by identifying the first image.
Optionally, in the embodiment of the present invention, when the user draws the first image on the first screen, the terminal device may obtain and record the position information of each pixel point in the first image in real time.
Step 202b, in response to the first input, for each pixel point in the first image, the terminal device maps, according to the position information of a first target pixel point in the first image, the pixel point information of the first target pixel point located at a first position in the first screen to a second target pixel point located at a second position on the second screen, and generates a second image and displays the second image on the second screen.
In an embodiment of the present invention, the second position is a position in the second screen corresponding to the first position.
It can be understood that the first target pixel point is any one pixel point in the first image; the second target pixel point is a pixel point on the second screen corresponding to the first target pixel point in the first image.
Optionally, in the embodiment of the present invention, the preset algorithm used for the mapping may be a system default of the terminal device or predefined by the user.
Optionally, in the embodiment of the present invention, the terminal device may vertically map the pixel point information of the first target pixel point in the first image to the second position on the second screen, so as to obtain the second target pixel point.
Optionally, in this embodiment of the present invention, the terminal device may increase or decrease the abscissa in the coordinate of the first target pixel by the first threshold, and/or increase or decrease the ordinate in the coordinate of the first target pixel by the second threshold, so as to determine the second target pixel. It should be noted that the first threshold and the second threshold may be the same or may be different.
For example, three pixel points in the first image 12 displayed on the first screen 10 are taken as an example for explanation. In combination with fig. 3, as shown in fig. 5, the coordinates of three pixel points (for example, pixel point 1, pixel point 2, and pixel point 3) in the first image 12 displayed on the first screen 10 are assumed to be (x11, y11), (x12, y12), and (x13, y13), respectively. The mobile phone can increase the abscissa (i.e., the X-axis coordinate) in the coordinates (x11, y11) of pixel point 1 by a value a, with the ordinate (i.e., the Y-axis coordinate) unchanged, to obtain a second target pixel point corresponding to pixel point 1, whose coordinates are (x21, y11); increase the abscissa of the coordinates (x12, y12) of pixel point 2 by the value a, with the ordinate unchanged, to obtain a second target pixel point corresponding to pixel point 2, whose coordinates are (x22, y12); and increase the abscissa of the coordinates (x13, y13) of pixel point 3 by the value a, with the ordinate unchanged, to obtain a second target pixel point corresponding to pixel point 3, whose coordinates are (x23, y13). In this manner, the second image 14 can be displayed on the second screen 11.
For another example, in combination with fig. 3, as shown in fig. 6, the mobile phone may increase the abscissa of the coordinates (x11, y11) of pixel point 1 by a value a and decrease the ordinate by a value b, to obtain a second target pixel point corresponding to pixel point 1, whose coordinates are (x21, y21); increase the abscissa of the coordinates (x12, y12) of pixel point 2 by the value a and decrease the ordinate by the value b, to obtain a second target pixel point corresponding to pixel point 2, whose coordinates are (x22, y22); and increase the abscissa of the coordinates (x13, y13) of pixel point 3 by the value a and decrease the ordinate by the value b, to obtain a second target pixel point corresponding to pixel point 3, whose coordinates are (x23, y23). In this manner, the second image 14 can be displayed on the second screen 11.
Optionally, in the embodiment of the present invention, the pixel information of the first target pixel may include a color value of the first target pixel; the pixel information of the second target pixel may include a color value of the second target pixel.
In the embodiment of the invention, the terminal equipment can display the second image corresponding to the first image on the second screen in a mode of mapping the information of each pixel point in the first image onto the second screen without drawing the second image again by a user, so that the drawing operation of the user can be simplified.
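The per-pixel mapping of step 202b can be sketched in Kotlin as follows, with the second screen modeled as a second Bitmap of the same size. The function name and the offsets a and b are assumptions taken from the fig. 5 and fig. 6 examples, not code from the patent.

import android.graphics.Bitmap

// Sketch only: copy the pixel point information of each first target pixel
// point at (x, y) to the second target pixel point at (x + a, y - b).
fun generateSecondImage(first: Bitmap, a: Int, b: Int = 0): Bitmap {
    val second = Bitmap.createBitmap(first.width, first.height, Bitmap.Config.ARGB_8888)
    for (y in 0 until first.height) {
        for (x in 0 until first.width) {
            val tx = x + a // abscissa shifted by a, as in fig. 5
            val ty = y - b // ordinate shifted by b, as in fig. 6 (b = 0 leaves it unchanged)
            if (tx in 0 until second.width && ty in 0 until second.height) {
                second.setPixel(tx, ty, first.getPixel(x, y)) // pixel information here is the ARGB color value
            }
        }
    }
    return second
}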
Optionally, in the embodiment of the present invention, the pixel point information may include a color value. Before the step 202, the image processing method provided by the embodiment of the present invention may further include the step 401 described below, and the step 202b may be specifically realized by the step 202b1 described below.
Step 401, the terminal device obtains a color value of each pixel point in the first image.
Optionally, in the embodiment of the present invention, when the user draws the first image on the first screen, the terminal device may obtain the color value of each pixel point in the first image in real time.
Step 202b1, the terminal device responds to the first input, maps the color value of the first target pixel point located at the first position in the first screen to the second target pixel point located at the second position on the second screen according to the preset color mapping relation, and generates and displays a second image on the second screen.
In the embodiment of the present invention, the terminal device may map the first target pixel point in the first image to the second target pixel point on the second screen according to the position information of the first target pixel point in the first image, and may process the color of the second target pixel point according to the preset color mapping relationship and the color value of the first target pixel point, so that the color value of the second target pixel point on the second screen is the same as the color value of the first target pixel point in the first image.
It should be noted that, in the embodiment of the present invention, the execution order of the above step 301 and step 401 is not limited. Specifically, in a possible implementation manner, step 301 may be executed first, and then step 401 may be executed; that is, the terminal device may first obtain the location information of each pixel point in the first image, and then obtain the color value of each pixel point in the first image. In another possible implementation manner, step 401 may be performed first, and then step 301 may be performed; that is, the terminal device may first obtain the color value of each pixel point in the first image, and then obtain the location information of each pixel point in the first image. In yet another possible implementation, step 301 and step 401 may be performed simultaneously; the terminal device can simultaneously acquire the position information of each pixel point in the first image and the color value of each pixel point in the first image.
In the embodiment of the present invention, the terminal device may adopt a mode of mapping the color value of each pixel point in the first image onto the second screen to display the second image corresponding to the first image on the second screen without drawing the second image again by the user, so that the drawing operation of the user may be simplified.
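Step 202b1 can correspondingly be sketched as a color mapping applied while copying. The mapping type and the two example relations below are assumptions; the identity relation reproduces the case described above, in which the second target pixel point keeps the color value of the first target pixel point.

import android.graphics.Color

// Sketch only: a preset color mapping relation modeled as a function over ARGB values.
typealias ColorMapping = (argb: Int) -> Int

val identityMapping: ColorMapping = { it } // second target pixel point keeps the same color value

val grayscaleMapping: ColorMapping = { c -> // an illustrative alternative relation
    val g = (Color.red(c) + Color.green(c) + Color.blue(c)) / 3
    Color.argb(Color.alpha(c), g, g, g)
}

// In the step 202b sketch above, the copy line would become:
// second.setPixel(tx, ty, mapping(first.getPixel(x, y)))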
And step 203, the terminal equipment receives a second input of the user.
In an embodiment of the present invention, the second input may be used to trigger the terminal device to display a target image (i.e., an image obtained by combining the first image and the second image).
Optionally, in the embodiment of the present invention, the second input may be an unfolding input to the terminal device.
It should be noted that the second input may be an input of the user unfolding the second screen, so that the second screen rotates around the axis between the first screen and the second screen, and the included angle between the first screen and the second screen falls within a second angle range. It can be understood that the included angle between the first screen and the second screen of the terminal device changes with the unfolding input of the user.
Alternatively, in the embodiment of the present invention, the second angle range may be 180° (i.e., the first screen and the second screen unfolded flat).
Illustratively, in conjunction with fig. 5, as shown in fig. 7, the user may make a second input to the second screen 11 to rotate the second screen 11 about the axis 13 between the first screen 10 and the second screen 11, so that the included angle between the first screen 10 and the second screen 11 is within a second angle range (the included angle between the first screen 10 and the second screen 11 is illustrated as 180 ° in fig. 7).
And step 204, the terminal equipment responds to the second input and outputs the target image based on the first image and the second image.
In an embodiment of the present invention, the target image includes a third image and a fourth image; the third image is a partial image content of the first image or an entire image content of the first image, and the fourth image is a partial image content of the second image or an entire image content of the second image.
It will be appreciated that the target image may comprise a portion of the image content of the first image and a portion of the image content of the second image; alternatively, the target image may include a partial image content of the first image and an entire image content of the second image; alternatively, the target image may include the entire image content of the first image and a partial image content of the second image; alternatively, the target image may include the entire image content of the first image and the entire image content of the second image.
Optionally, in the embodiment of the present invention, the terminal device may display the target image on the target screen in response to the second input, where the target screen is the first screen or the second screen.
Optionally, in the embodiment of the present invention, the step 204 may be specifically implemented by the following step 204a.
And step 204a, the terminal equipment carries out image synthesis on the first image and the second image, generates a target image and outputs the target image.
According to the embodiment of the invention, after the terminal equipment receives the second input, the first image and the second image can be merged to obtain the target image, and then the target image is displayed on the target screen. It is understood that the target image is an image obtained by synthesizing the first image and the second image.
Optionally, in the embodiment of the present invention, the terminal device may move the first image and/or the second image to splice the first image and the second image to obtain the target image.
Illustratively, in conjunction with fig. 7, as shown in fig. 8, the mobile phone may merge a first image 12 displayed on the first screen 10 and a second image 14 displayed on the second screen 11 to obtain a target image 15, and display the target image 15 on the first screen 10.
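A Kotlin sketch of the image synthesis in step 204a, splicing the two images side by side as in the fig. 8 example. The side-by-side layout is an assumption; the patent also allows moving the images before splicing them.

import android.graphics.Bitmap
import android.graphics.Canvas

// Sketch only: draw the first image and the second image onto one canvas to
// obtain the target image.
fun synthesizeTarget(firstImage: Bitmap, secondImage: Bitmap): Bitmap {
    val target = Bitmap.createBitmap(
        firstImage.width + secondImage.width,
        maxOf(firstImage.height, secondImage.height),
        Bitmap.Config.ARGB_8888
    )
    val canvas = Canvas(target)
    canvas.drawBitmap(firstImage, 0f, 0f, null)                          // left part
    canvas.drawBitmap(secondImage, firstImage.width.toFloat(), 0f, null) // right part, spliced on
    return target
}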
Optionally, in the embodiment of the present invention, after obtaining the target image, the terminal device may store the target image in the terminal device; alternatively, after the terminal device displays the target image on the target screen, the user may perform an operation with respect to the target image (e.g., perform an editing operation on the target image, etc.).
In a case where the first image is displayed on the first screen, the terminal device may mirror the first image displayed on the first screen onto the second screen according to a first input of the user, so as to display the second image on the second screen, and may then display the target image on the target screen according to a second input of the user. The target image includes a third image and a fourth image, where the third image is partial image content or entire image content of the first image, and the fourth image is partial image content or entire image content of the second image. Because the second image is obtained by the terminal device mirroring the first image displayed on the first screen, rather than by the user drawing it again on the terminal device, the operation of the user can be simplified, and the time consumed by the operation of the user can be saved.
Optionally, after step 202b1, the image processing method according to the embodiment of the present invention may further include step 501 and step 502 described below.
Step 501, the terminal device receives a fourth input from the user.
In an embodiment of the present invention, the fourth input is a selection input of a target color control in at least one color control on the second screen.
Optionally, in this embodiment of the present invention, after the terminal device displays the second image on the second screen, at least one color control may be displayed on the second screen, where one color control is used to indicate one color adjustment mode.
Optionally, in this embodiment of the present invention, the terminal device may display the second image and the at least one color control on the same interface of the second screen; alternatively, the terminal device may display the second image and the at least one color control on different interfaces of the second screen, respectively.
It should be noted that, in the embodiment of the present invention, the target color control may be one color control or multiple color controls in at least one color control.
Optionally, in this embodiment of the present invention, after the terminal device displays the second image on the second screen, the user may perform an input (for example, a pressing input) on the second screen to trigger the terminal device to display the at least one color control on the second screen.
Optionally, in this embodiment of the present invention, after the user performs the press input on the second screen, the terminal device may detect a contact area between the finger of the user and the second screen, and in a case that the contact area is greater than a third threshold, the terminal device may display at least one color control on the second screen.
Optionally, in an embodiment of the present invention, the at least one color control may include at least one of the following: a color flipping control, a color deepening control, a color lightening control, a user-defined color control, and the like.
It can be understood that, when the target color control is a user-defined color control, the user may input the color adjustment mode in the color adjustment mode input area displayed on the second screen.
And 502, the terminal equipment responds to the fourth input and adjusts the color value of the second target pixel point according to the target color adjusting mode corresponding to the target color control.
In the embodiment of the present invention, the target color adjustment manner is a color adjustment manner indicated by the target color control.
In the embodiment of the present invention, the terminal device may process the color value of each pixel point in the second image (increase or decrease the color value of each pixel point by the fourth threshold value, respectively) according to the target color adjustment mode, so as to change the color of each pixel point in the second image, thereby changing the color of the second image.
Optionally, in the embodiment of the present invention, the terminal device may process the color value of the second image by using an algorithm corresponding to the target color adjustment manner, so as to obtain the processed second image.
Illustratively, it is assumed that the target color control is a color deepening control, and the color deepening control is used to indicate a color deepening rule. The algorithm corresponding to the color deepening rule may be expressed as a function L = f(K), where L denotes the color value of the image after color deepening and K denotes the current color value of the image (the specific form of the function is not limited here).
As another example, assuming that the target color control is a user-defined color control, the user-defined color control is used to indicate a user-defined color adjustment manner. The algorithm corresponding to the user-defined color adjustment manner may be, for example, a binary equation, a square-root calculation, or the like, and the user makes a selection input on one of these algorithms.
For example, in conjunction with fig. 5, as shown in (A) of fig. 9, the second image 14 and at least one color control (e.g., a color flipping control, a color deepening control, a color lightening control, and a user-defined color control) are displayed on the second screen 11 of the mobile phone. After the user makes a selection input on a target color control (e.g., the user-defined color control) of the at least one color control, as shown in (B) of fig. 9, the mobile phone displays the corresponding color adjustment modes (e.g., a binary equation, a square-root calculation) and a color adjustment mode input area on the second screen 11, and the user can input the color adjustment mode in the color adjustment mode input area.
Optionally, in this embodiment of the present invention, the terminal device may also display at least one color control on the first screen, so that the user selects the color control from the at least one color control, thereby triggering the terminal device to process the color value of the first image displayed on the first screen.
In the embodiment of the invention, the terminal equipment can adjust the color value of the second target pixel point according to the target color adjustment mode corresponding to the target color control selected by the user so as to quickly change the color of the second image, thereby improving the display effect of the terminal equipment.
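The color adjustment of step 502 can be sketched as follows. The control set and the deepening/lightening factors are assumptions, since the patent does not fix the concrete formulas.

import android.graphics.Bitmap
import android.graphics.Color

// Sketch only: adjust the color value of every second target pixel point
// according to the adjustment mode indicated by the selected color control.
enum class ColorControl { FLIP, DEEPEN, LIGHTEN }

fun adjustColors(second: Bitmap, control: ColorControl): Bitmap {
    val out = second.copy(Bitmap.Config.ARGB_8888, true)
    fun scaled(v: Int, f: Float) = (v * f).toInt().coerceIn(0, 255)
    for (y in 0 until out.height) for (x in 0 until out.width) {
        val c = out.getPixel(x, y)
        val adjusted = when (control) {
            ColorControl.FLIP -> // color flipping: invert each channel
                Color.argb(Color.alpha(c), 255 - Color.red(c), 255 - Color.green(c), 255 - Color.blue(c))
            ColorControl.DEEPEN -> // color deepening: assumed darkening factor 0.8
                Color.argb(Color.alpha(c), scaled(Color.red(c), 0.8f), scaled(Color.green(c), 0.8f), scaled(Color.blue(c), 0.8f))
            ColorControl.LIGHTEN -> // color lightening: assumed brightening factor 1.2
                Color.argb(Color.alpha(c), scaled(Color.red(c), 1.2f), scaled(Color.green(c), 1.2f), scaled(Color.blue(c), 1.2f))
        }
        out.setPixel(x, y, adjusted)
    }
    return out
}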
Optionally, in the embodiment of the present invention, before the step 203, the image processing method provided in the embodiment of the present invention may further include the following step 601 and step 602.
Step 601, the terminal device receives a fifth input of the user.
In an embodiment of the present invention, the fifth input is an input of bending the first screen and the second screen.
Optionally, in the embodiment of the present invention, both the first screen and the second screen may be flexible screens.
In the embodiment of the present invention, the user's input of bending the first screen may be used to trigger movement of the first image displayed on the first screen, and the user's input of bending the second screen may be used to trigger movement of the second image displayed on the second screen.
Step 602, the terminal device responds to a fifth input, and controls the first image and the second image to move towards or away from each other.
Wherein, the bending inputs in different directions correspond to movements in different directions.
Optionally, in this embodiment of the present invention, the terminal device may control the first image to move toward the direction close to the second image, and the second image to move toward the direction close to the first image, so that the distance between the first image and the second image decreases; alternatively, the terminal device may control the first image to move away from the second image, and the second image to move away from the first image, so that the distance between the first image and the second image increases.
Optionally, in the embodiment of the present invention, the user may perform a bending input on the first screen and the second screen to trigger the first image and the second image to move towards or away from each other along a first direction (the lateral direction); and/or the user may perform a bending input on the first screen and the second screen to trigger the first image and the second image to move towards or away from each other along a second direction (the longitudinal direction). The first direction and the second direction are perpendicular to each other.
It is understood that the layout of the first image and the second image in the target image displayed on the target screen by the terminal device is associated with the movement of the first image and the second image.
According to the embodiment of the invention, the terminal equipment can control the first image and the second image to move in opposite directions or back to back according to the bending input of the first screen and the second screen by the user, so that the display effect of the target image displayed by the terminal equipment is diversified, and the display effect of the terminal equipment is improved.
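A minimal Kotlin sketch of step 602; the direction encoding and the step size are assumptions, illustrating only that bending inputs in different directions correspond to movements in different directions.

// Sketch only: update the horizontal offsets of the two images in response to
// a bending input, moving them towards or away from each other.
enum class BendDirection { INWARD, OUTWARD }

fun moveImages(firstOffsetX: Float, secondOffsetX: Float,
               bend: BendDirection, step: Float = 8f): Pair<Float, Float> =
    when (bend) {
        BendDirection.INWARD -> firstOffsetX + step to secondOffsetX - step  // towards each other
        BendDirection.OUTWARD -> firstOffsetX - step to secondOffsetX + step // away from each other
    }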
Optionally, with reference to fig. 2, as shown in fig. 10, after the step 202, the image processing method according to the embodiment of the present invention may further include a step 701 and a step 702, and the step 204 may be specifically implemented by a step 204b described below.
Step 701, the terminal device receives a sixth input of the user.
In an embodiment of the present invention, the sixth input is a selection input on a first area in the first image and/or a second area in the second image.
Optionally, in the embodiment of the present invention, the terminal device may display a region selection control on the second screen, and the user may perform selection input on the region selection control to trigger the terminal device to start a region selection function.
Optionally, in the embodiment of the present invention, after the area selection function is turned on, the terminal device may display two axes (for example, an X axis and a Y axis) on the first screen, so that the user may perform a drag input on the X axis and/or the Y axis to select the first area from the first image; and/or the terminal device may display two axes (for example, an X axis and a Y axis) on the second screen, so that the user may perform a drag input on the X axis and/or the Y axis to select the second area from the second image.
Optionally, in the embodiment of the present invention, after the terminal device starts the area selection function, the user may select an area of any shape in the first image to trigger the terminal device to determine that area as the first area; and/or the user may select an area of any shape in the second image to trigger the terminal device to determine that area as the second area.
Step 702, the terminal device responds to a sixth input, and acquires a fifth image and a sixth image.
In an embodiment of the present invention, the fifth image is an image in the first area in the first image, and the sixth image is the second image or an image in the second area in the second image; or, the fifth image is the first image, and the sixth image is an image located in the second area in the second image.
It is understood that the fifth image is an image in the first area in the first image, and the sixth image is the second image; or, the fifth image is an image in the first area in the first image, and the sixth image is an image in the second area in the second image; or, the fifth image is the first image, and the sixth image is an image located in the second area in the second image.
In an embodiment of the present invention, a size of the first area is smaller than or equal to a size of the first image; the size of the second area is smaller than or equal to the size of the second image.
It is understood that the image in the first image located in the first area is a partial image content of the first image or an entire image content of the first image; the image in the second area in the second image is a partial image content of the second image or the entire image content of the second image.
And step 204b, the terminal device responds to the second input, performs image synthesis on the fifth image and the sixth image, generates a target image and outputs the target image.
Optionally, in the embodiment of the present invention, the terminal device may perform image synthesis on the first image and the second image, generate the target image, and output the target image; or, the terminal device may perform image synthesis on the first image and an image located in the second region in the second image, generate a target image, and output the target image; or, the terminal device may perform image synthesis on the image located in the first region in the first image and the second image, generate a target image, and output the target image; alternatively, the terminal device may perform image synthesis on an image located in the first region in the first image and an image located in the second region in the second image, generate a target image, and output the target image.
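Steps 702 and 204b can be sketched in Kotlin as cropping followed by the same splicing used for step 204a. Rectangular areas are an assumption (the patent also allows areas of any shape), and passing null for an area keeps the whole image, which covers all four combinations listed above.

import android.graphics.Bitmap
import android.graphics.Rect

// Sketch only: crop the selected areas into the fifth and sixth images, then
// synthesize them into the target image.
fun cropArea(image: Bitmap, area: Rect?): Bitmap =
    if (area == null) image
    else Bitmap.createBitmap(image, area.left, area.top, area.width(), area.height())

fun outputTarget(first: Bitmap, second: Bitmap, firstArea: Rect?, secondArea: Rect?): Bitmap {
    val fifth = cropArea(first, firstArea)   // the first image, or the image in the first area
    val sixth = cropArea(second, secondArea) // the second image, or the image in the second area
    return synthesizeTarget(fifth, sixth)    // splice, reusing the step 204a sketch above
}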
Exemplarily, in conjunction with fig. 5, as shown in (A) of fig. 11, the second image 14 and two axes (for example, an axis 16 and an axis 17) are displayed on the second screen 11 of the mobile phone, and the user may perform a drag input on the axis 16 and the axis 17 to trigger the mobile phone to determine an area 18 (illustrated by the filled area in (A) of fig. 11) as the second area. After receiving the second input, the mobile phone may merge the first image 12 with the image in the area 18 to display the target image 15 on the first screen 10, as shown in (B) of fig. 11.
As another example, in conjunction with fig. 5, as shown in (A) of fig. 12, the first image 12 and two axes (e.g., an axis 16 and an axis 17) are displayed on the first screen 10 of the mobile phone, and the second image 14 and two axes (e.g., the axis 16 and the axis 17) are displayed on the second screen 11. The user may perform a drag input on the axes 16 and 17 displayed on the first screen 10 to trigger the mobile phone to determine an area 19 as the first area, and perform a drag input on the axes 16 and 17 displayed on the second screen 11 to trigger the mobile phone to determine an area 20 as the second area. After receiving the second input, the mobile phone may image-merge the image in the area 19 and the image in the area 20 to output the target image 15 as shown in (B) of fig. 12 (in (B) of fig. 12, the target image 15 is illustrated as being displayed across the first screen and the second screen).
In the embodiment of the invention, the user can select the first area from the first image and/or select the second area from the second image, so that the terminal equipment can generate and output the target image according to the selection input of the user, and the display effect of the target image displayed by the terminal equipment is diversified, thereby improving the display effect of the terminal equipment.
Fig. 13 shows a schematic diagram of a possible structure of a terminal device involved in the embodiment of the present invention, where the terminal device may include a first screen and a second screen. As shown in fig. 13, the terminal device 130 may include: a receiving unit 131, a processing unit 132 and an output unit 133.
The receiving unit 131 is configured to receive a first input from a user when the first image is displayed on the first screen. A processing unit 132, configured to mirror the first image displayed on the first screen onto the second screen in response to the first input received by the receiving unit 131, generate a second image and display the second image on the second screen. The receiving unit 131 is further configured to receive a second input from the user. An output unit 133 for outputting a target image including the third image and the fourth image based on the first image and the second image in response to the second input received by the receiving unit 131. The third image is a partial image content of the first image or an entire image content of the first image, and the fourth image is a partial image content of the second image or an entire image content of the second image.
In a possible implementation manner, the first input may be a folding input to the terminal device. The processing unit 132 is specifically configured to mirror the first image displayed on the first screen onto the second screen when the included angle between the first screen and the second screen is within a first angle range.
In a possible implementation manner, with reference to fig. 13, as shown in fig. 14, the terminal device 130 provided in the embodiment of the present invention may further include: an acquisition unit 134. The obtaining unit 134 is configured to, before the processing unit 132 mirrors the first image displayed on the first screen to the second screen, generate the second image and display the second image on the second screen, obtain the position information of each pixel point in the first image. The processing unit 132 is specifically configured to map, for each pixel point in the first image, pixel point information of a first target pixel point located at a first position in the first screen to a second target pixel point located at a second position on the second screen according to the position information of a first target pixel point in the first image acquired by the acquiring unit 134; and the second position is a position corresponding to the first position in the second screen.
In a possible implementation manner, the receiving unit 131 is further configured to receive a third input from the user before receiving the first input from the user, where the third input is a selection input from the user to the first image in the terminal device, or the third input is an input from the user to draw the first image on the first screen. The processing unit 132 is further configured to display the first image on the first screen in response to the third input received by the receiving unit 131, and record position information of each pixel point in the first image.
In a possible implementation manner, the pixel point information may include a color value. The obtaining unit 134 is further configured to, before the processing unit 132 mirrors the first image displayed on the first screen to the second screen, generate the second image and display the second image on the second screen, obtain a color value of each pixel point in the first image. The processing unit 132 is specifically configured to map the color value of the first target pixel point acquired by the acquiring unit 134 to a second target pixel point at a second position on a second screen according to a preset color mapping relationship.
In a possible implementation manner, the receiving unit 131 is further configured to receive a fourth input from the user after the processing unit 132 maps the color value of the first target pixel point to the second target pixel point at the second position on the second screen, where the fourth input is a selection input of a target color control among at least one color control on the second screen. With reference to fig. 13, as shown in fig. 15, the terminal device 130 provided in the embodiment of the present invention may further include: an adjusting unit 135. The adjusting unit 135 is configured to adjust the color value of the second target pixel point according to the target color adjustment manner corresponding to the target color control, in response to the fourth input received by the receiving unit 131.
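The target color adjustment manner bound to a color control could, for instance, scale or flatten the channels of the mirrored pixel; the two controls and their factors below are hypothetical examples:

```python
# Sketch: each color control on the second screen is bound to an adjustment
# manner. The two controls and the 1.2 factor are hypothetical examples.
def brighten(color, factor=1.2):
    return tuple(min(255, int(c * factor)) for c in color)

def desaturate(color):
    gray = sum(color) // 3
    return (gray, gray, gray)

COLOR_CONTROLS = {"brighten": brighten, "desaturate": desaturate}

def apply_target_control(control_name, second_target_color):
    """Adjust the second target pixel's color value per the selected control."""
    return COLOR_CONTROLS[control_name](second_target_color)

print(apply_target_control("brighten", (100, 50, 0)))  # (120, 60, 0)
```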
In a possible implementation manner, the receiving unit 131 is further configured to receive a fifth input of the user before receiving the second input of the user, where the fifth input is an input of bending the first screen and the second screen. With reference to fig. 13, as shown in fig. 16, the terminal device 130 according to the embodiment of the present invention may further include: a control unit 136. The control unit 136 is configured to control the first image and the second image to move towards or away from each other in response to a fifth input received by the receiving unit 131. Wherein, the bending inputs in different directions correspond to movements in different directions.
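One way to picture the direction mapping: an inward bend nudges both images toward the hinge, an outward bend nudges them apart. A sketch with an assumed step size:

```python
# Sketch: bending inputs in different directions move the two images in
# different directions. Each offset is the image's distance (px) from its
# screen's hinge edge; the 10 px step is an assumed value.
def move_images(first_offset, second_offset, bend_direction, step=10):
    """Inward bends move the images toward each other; outward bends apart."""
    delta = -step if bend_direction == "inward" else step
    return max(0, first_offset + delta), max(0, second_offset + delta)

print(move_images(40, 40, "inward"))   # (30, 30): images approach the hinge
print(move_images(40, 40, "outward"))  # (50, 50): images move apart
```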
In a possible implementation manner, the receiving unit 131 is further configured to receive a sixth input from the user after the processing unit 132 generates the second image and displays it on the second screen, where the sixth input is a selection input of a first area in the first image and/or a second area in the second image. As shown in fig. 14, the terminal device 130 provided in the embodiment of the present invention may further include the acquiring unit 134, which is configured to acquire a fifth image and a sixth image in response to the sixth input received by the receiving unit 131. The output unit 133 is specifically configured to perform image synthesis on the fifth image and the sixth image acquired by the acquiring unit 134 to generate and output a target image. The fifth image is the first image or the image located in the first area of the first image, and the sixth image is the second image or the image located in the second area of the second image.
In a possible implementation manner, the output unit 133 is specifically configured to perform image synthesis on the first image and the second image, generate a target image, and output the target image.
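Assuming the two images (or the selected fifth/sixth regions) share a common height, synthesis can be as simple as joining them side by side across the hinge; a sketch, not the claimed compositor:

```python
# Sketch of image synthesis, assuming equal heights: rows of the two images
# (or of the selected fifth/sixth regions) are joined side by side, with the
# seam corresponding to the hinge between the two screens.
def synthesize(left_image, right_image):
    """Join two row-major images into one symmetric target image."""
    assert len(left_image) == len(right_image), "heights must match"
    return [lrow + rrow for lrow, rrow in zip(left_image, right_image)]

left = [[1, 2], [3, 4]]
right = [[2, 1], [4, 3]]
print(synthesize(left, right))  # [[1, 2, 2, 1], [3, 4, 4, 3]]
```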
The terminal device provided by the embodiment of the present invention can implement each process implemented by the terminal device in the above method embodiments, and for avoiding repetition, detailed description is not repeated here.
In a case where a first image is displayed on a first screen, the terminal device may mirror the first image displayed on the first screen onto a second screen according to a first input of a user, so that a second image is displayed on the second screen, and may then output a target image on the first screen or the second screen according to a second input of the user. The target image includes a third image and a fourth image, where the third image is a partial image content of the first image or the entire image content of the first image, and the fourth image is a partial image content of the second image or the entire image content of the second image. Because the second image is obtained by mirroring the first image displayed on the first screen, rather than by the user drawing it on the terminal device a second time, the operation of the user is simplified and the time consumed by the operation is reduced.
Fig. 17 is a hardware diagram of a terminal device for implementing various embodiments of the present invention. As shown in fig. 17, the terminal device 170 includes, but is not limited to: a radio frequency unit 171, a network module 172, an audio output unit 173, an input unit 174, a sensor 175, a display unit 176, a user input unit 177, an interface unit 178, a memory 179, a processor 180, and a power supply 181.
It should be noted that, as those skilled in the art will appreciate, the terminal device structure shown in fig. 17 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than those shown in fig. 17, a combination of some components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The user input unit 177 is configured to receive a first input from a user when the first image is displayed on the first screen, and is further configured to receive a second input from the user.
The processor 180 is configured to, in response to the first input received by the user input unit 177, mirror a first image displayed on a first screen onto a second screen, generating a second image and displaying it on the second screen; and to output, in response to the second input received by the user input unit 177, a target image based on the first image and the second image, the target image including a third image and a fourth image. The third image is a partial image content of the first image or the entire image content of the first image, and the fourth image is a partial image content of the second image or the entire image content of the second image.
In a case where a first image is displayed on a first screen, the terminal device may mirror the first image displayed on the first screen onto a second screen according to a first input of a user, so that a second image is displayed on the second screen, and may then output a target image on the first screen or the second screen according to a second input of the user. The target image includes a third image and a fourth image, where the third image is a partial image content of the first image or the entire image content of the first image, and the fourth image is a partial image content of the second image or the entire image content of the second image. Because the second image is obtained by mirroring the first image displayed on the first screen, rather than by the user drawing it on the terminal device a second time, the operation of the user is simplified and the time consumed by the operation is reduced.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 171 may be used to receive and send signals during messaging or a call. Specifically, it receives downlink data from a base station and forwards the data to the processor 180 for processing, and it transmits uplink data to the base station. Generally, the radio frequency unit 171 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 171 may also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 172, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 173 may convert audio data received by the radio frequency unit 171 or the network module 172, or stored in the memory 179, into an audio signal and output it as sound. The audio output unit 173 may also provide audio output related to a specific function performed by the terminal device 170 (e.g., a call signal reception sound or a message reception sound). The audio output unit 173 includes a speaker, a buzzer, a receiver, and the like.
The input unit 174 is used to receive audio or video signals. The input unit 174 may include a graphics processing unit (GPU) 1741 and a microphone 1742. The graphics processor 1741 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 176, stored in the memory 179 (or another storage medium), or transmitted via the radio frequency unit 171 or the network module 172. The microphone 1742 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 171 and output.
The terminal device 170 further includes at least one sensor 175, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor adjusts the brightness of the display panel 1761 according to the brightness of ambient light, and the proximity sensor turns off the display panel 1761 and/or the backlight when the terminal device 170 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as switching between horizontal and vertical screens, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer and tapping). The sensor 175 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
The display unit 176 is used to display information input by the user or information provided to the user. The display unit 176 may include a display panel 1761, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 177 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 177 includes a touch panel 1771 and other input devices 1772. The touch panel 1771, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 1771 with a finger, a stylus, or any suitable object or attachment). The touch panel 1771 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 180, and receives and executes commands from the processor 180. In addition, the touch panel 1771 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type. Besides the touch panel 1771, the user input unit 177 may also include other input devices 1772. Specifically, the other input devices 1772 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 1771 may be overlaid on the display panel 1761. When the touch panel 1771 detects a touch operation on or near it, the operation is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 1761 according to that type. Although the touch panel 1771 and the display panel 1761 are shown in fig. 17 as two separate components implementing the input and output functions of the terminal device, in some embodiments the touch panel 1771 and the display panel 1761 may be integrated to implement these functions, which is not limited here.
The interface unit 178 is an interface for connecting an external device to the terminal device 170. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 178 may be used to receive input (e.g., data information or power) from an external device and transmit the received input to one or more elements within the terminal device 170, or to transmit data between the terminal device 170 and the external device.
The memory 179 may be used to store software programs and various data. The memory 179 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal device (such as audio data and a phonebook). Further, the memory 179 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 180 is the control center of the terminal device. It connects the various parts of the entire terminal device using various interfaces and lines, and performs the functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 179 and by invoking the data stored in the memory 179, thereby monitoring the terminal device as a whole. The processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 180.
The terminal device 170 may further include a power supply 181 (e.g., a battery) for supplying power to the various components. Preferably, the power supply 181 may be logically connected to the processor 180 via a power management system, so that charging, discharging, and power consumption are managed via the power management system.
In addition, the terminal device 170 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, which includes the processor 180 shown in fig. 17, the memory 179, and a computer program stored in the memory 179 and executable on the processor 180. When executed by the processor 180, the computer program implements each process of the foregoing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the processes of the foregoing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (19)

1. An image processing method is applied to a terminal device, the terminal device comprises a first screen and a second screen, and the method is characterized by comprising the following steps:
receiving a first input of a user in a case where a first image is displayed on the first screen;
in response to the first input, mirroring the first image displayed on the first screen onto the second screen, generating and displaying a second image on the second screen;
receiving a second input of the user;
outputting a target image on the first screen or the second screen based on the first image and the second image in response to the second input, the target image including a third image and a fourth image;
the third image is a partial image content of the first image or an entire image content of the first image, and the fourth image is a partial image content of the second image or an entire image content of the second image.
2. The method of claim 1, wherein the first input is a bending input to the terminal device;
the mirroring of the first image displayed on the first screen onto the second screen includes:
and under the condition that the included angle between the first screen and the second screen is within a first angle range, the first image displayed on the first screen is mirrored onto the second screen.
3. The method of claim 1 or 2, wherein before mirroring the first image displayed on the first screen onto the second screen, generating and displaying a second image on the second screen, the method further comprises:
acquiring position information of each pixel point in the first image;
the mirroring of the first image displayed on the first screen onto the second screen, generating and displaying a second image on the second screen includes:
for each pixel point in the first image, mapping the pixel point information of a first target pixel point at a first position in the first screen to a second target pixel point at a second position on the second screen according to the position information of the first target pixel point in the first image;
and the second position is a position corresponding to the first position in the second screen.
4. The method of claim 1 or 2, wherein prior to receiving the first input from the user, the method further comprises:
receiving a third input of a user, wherein the third input is a selection input of the first image in the terminal equipment by the user, or the third input is an input of the first image drawn on the first screen by the user;
in response to the third input, displaying the first image on the first screen and recording position information of each pixel point in the first image.
5. The method of claim 3, wherein the pixel point information comprises a color value;
before the mirroring of the first image displayed on the first screen onto the second screen, the generation of the second image and the display on the second screen, the method further includes:
obtaining a color value of each pixel point in the first image;
the mapping of the pixel point information of the first target pixel point located at the first position in the first screen to the second target pixel point located at the second position on the second screen includes:
and mapping the color value of the first target pixel point to a second target pixel point at a second position on the second screen according to a preset color mapping relation.
6. The method of claim 5, wherein after mapping the color value of the first target pixel to a second target pixel at a second location on the second screen, the method further comprises:
receiving a fourth input of the user, wherein the fourth input is a selection input of a target color control in the at least one color control on the second screen;
and responding to the fourth input, and adjusting the color value of the second target pixel point according to a target color adjustment mode corresponding to the target color control.
7. The method of claim 1, wherein prior to receiving the second input from the user, the method further comprises:
receiving a fifth input of a user, wherein the fifth input is an input for bending the first screen and the second screen;
controlling the first image and the second image to move towards or away from each other in response to the fifth input;
wherein, the bending inputs in different directions correspond to movements in different directions.
8. The method of claim 1, wherein after the second image is generated and displayed on the second screen, the method further comprises:
receiving a sixth input of a user, wherein the sixth input is a selection input of a first area in the first image and/or a second area in the second image;
acquiring a fifth image and a sixth image in response to the sixth input;
the outputting a target image based on the first image and the second image comprises:
carrying out image synthesis on the fifth image and the sixth image to generate and output a target image;
wherein the fifth image is the image of the first image located in the first area, and the sixth image is the second image or the image of the second image located in the second area; or, the fifth image is the first image, and the sixth image is the image of the second image located in the second area.
9. The method of claim 1, wherein outputting a target image based on the first image and the second image comprises:
and carrying out image synthesis on the first image and the second image to generate and output a target image.
10. A terminal device, the terminal device including a first screen and a second screen, the terminal device comprising: a receiving unit, a processing unit and an output unit;
the receiving unit is used for receiving a first input of a user under the condition that a first image is displayed on the first screen;
the processing unit is used for responding to the first input received by the receiving unit, mirroring the first image displayed on the first screen to the second screen, generating a second image and displaying the second image on the second screen;
the receiving unit is further used for receiving a second input of the user;
the output unit is used for responding to the second input received by the receiving unit, and outputting a target image on the first screen or the second screen based on the first image and the second image, wherein the target image comprises a third image and a fourth image;
the third image is a partial image content of the first image or an entire image content of the first image, and the fourth image is a partial image content of the second image or an entire image content of the second image.
11. A terminal device according to claim 10, characterized in that the first input is a bending input to the terminal device;
the processing unit is specifically configured to mirror the first image displayed on the first screen onto the second screen when an included angle between the first screen and the second screen is within a first angle range.
12. The terminal device according to claim 10 or 11, wherein the terminal device further comprises: an acquisition unit;
the acquiring unit is configured to acquire position information of each pixel point in the first image before the processing unit mirrors the first image displayed on the first screen onto the second screen, generates a second image, and displays the second image on the second screen;
the processing unit is specifically configured to map, for each pixel point in the first image, pixel point information of a first target pixel point located at a first position in the first screen to a second target pixel point located at a second position on the second screen according to the position information of the first target pixel point in the first image acquired by the acquiring unit;
and the second position is a position corresponding to the first position in the second screen.
13. The terminal device according to claim 10 or 11, wherein the receiving unit is further configured to receive a third input from the user before receiving the first input from the user, where the third input is a user selection input for the first image in the terminal device, or the third input is an input for the user to draw the first image on the first screen;
the processing unit is further configured to display the first image on the first screen in response to the third input received by the receiving unit, and record position information of each pixel point in the first image.
14. The terminal device of claim 12, wherein the pixel point information includes a color value;
the acquiring unit is further configured to acquire a color value of each pixel point in the first image before the processing unit mirrors the first image displayed on the first screen onto the second screen, generates a second image and displays the second image on the second screen;
the processing unit is specifically configured to map the color value of the first target pixel point obtained by the obtaining unit to a second target pixel point at a second position on the second screen according to a preset color mapping relationship.
15. The terminal device according to claim 14, wherein the receiving unit is further configured to receive a fourth input from the user after the processing unit maps the color value of the first target pixel point to a second target pixel point at a second position on the second screen, where the fourth input is a selection input of a target color control in at least one color control on the second screen;
the terminal device further includes: an adjustment unit;
and the adjusting unit is used for responding to the fourth input received by the receiving unit and adjusting the color value of the second target pixel point according to the target color adjusting mode corresponding to the target color control.
16. The terminal device according to claim 10, wherein the receiving unit is further configured to receive a fifth input from the user before receiving the second input from the user, where the fifth input is an input that bends the first screen and the second screen;
the terminal device further includes: a control unit;
the control unit is used for responding to the fifth input received by the receiving unit and controlling the first image and the second image to move towards or away from each other;
wherein, the bending inputs in different directions correspond to movements in different directions.
17. The terminal device according to claim 10, wherein the receiving unit is further configured to receive a sixth input from the user after the processing unit generates and displays the second image on the second screen, where the sixth input is a selection input for the first area in the first image and/or the second area in the second image;
the terminal device further includes: an acquisition unit;
the acquiring unit is used for responding to the sixth input received by the receiving unit and acquiring a fifth image and a sixth image;
the output unit is specifically configured to perform image synthesis on the fifth image and the sixth image, generate a target image, and output the target image;
the fifth image is the first image or an image in the first image, which is located in the first area, and the sixth image is the second image or an image in the second image, which is located in the second area.
18. The terminal device according to claim 10, wherein the output unit is specifically configured to perform image synthesis on the first image and the second image, generate a target image, and output the target image.
19. A terminal device, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 9.
CN201811595054.1A 2018-12-25 2018-12-25 Image processing method and terminal equipment Active CN109842722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811595054.1A CN109842722B (en) 2018-12-25 2018-12-25 Image processing method and terminal equipment

Publications (2)

Publication Number Publication Date
CN109842722A (en) 2019-06-04
CN109842722B (en) 2021-02-02

Family

ID=66883367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811595054.1A Active CN109842722B (en) 2018-12-25 2018-12-25 Image processing method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109842722B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115113834A (en) * 2021-03-22 2022-09-27 Oppo广东移动通信有限公司 Display method, electronic equipment and storage medium
CN112965681B (en) * 2021-03-30 2022-12-23 维沃移动通信有限公司 Image processing method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1507386A1 (en) * 2003-08-14 2005-02-16 Fuji Photo Film Co., Ltd. Image pickup device and image synthesizing method
CN109002243A (en) * 2018-06-28 2018-12-14 维沃移动通信有限公司 A kind of image parameter adjusting method and terminal device

Similar Documents

Publication Publication Date Title
CN108495029B (en) Photographing method and mobile terminal
CN110908558B (en) Image display method and electronic equipment
CN109725683B (en) Program display control method and folding screen terminal
CN108319418B (en) Interface display control method and mobile terminal
CN111026316A (en) Image display method and electronic equipment
CN111010512A (en) Display control method and electronic equipment
CN111031234B (en) Image processing method and electronic equipment
CN109348019B (en) Display method and device
EP4131067A1 (en) Detection result output method, electronic device, and medium
EP3731506A1 (en) Image display method and mobile terminal
CN109413264B (en) Background picture adjusting method and terminal equipment
CN109448069B (en) Template generation method and mobile terminal
CN108174110B (en) Photographing method and flexible screen terminal
CN110908517B (en) Image editing method, image editing device, electronic equipment and medium
CN111031178A (en) Video stream clipping method and electronic equipment
WO2020192298A1 (en) Image processing method and terminal device
CN110908750B (en) Screen capturing method and electronic equipment
CN110602390B (en) Image processing method and electronic equipment
CN110555815B (en) Image processing method and electronic equipment
CN110536005B (en) Object display adjustment method and terminal
CN110717964B (en) Scene modeling method, terminal and readable storage medium
CN109842722B (en) Image processing method and terminal equipment
CN109859718B (en) Screen brightness adjusting method and terminal equipment
CN109104573B (en) Method for determining focusing point and terminal equipment
CN108833791B (en) Shooting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant