CN109729269B - Image processing method, terminal equipment and computer readable storage medium - Google Patents


Info

Publication number
CN109729269B
Authority
CN
China
Prior art keywords: image, images, image display, original, display parameters
Prior art date
Legal status
Active
Application number
CN201811622123.3A
Other languages
Chinese (zh)
Other versions
CN109729269A (en
Inventor
寇飞 (Kou Fei)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811622123.3A priority Critical patent/CN109729269B/en
Publication of CN109729269A publication Critical patent/CN109729269A/en
Application granted granted Critical
Publication of CN109729269B publication Critical patent/CN109729269B/en

Abstract

The invention provides an image processing method, a terminal device, and a computer-readable storage medium. The method is applied to the terminal device and includes the following steps: collecting M original images, where M is an integer greater than or equal to 2; generating and displaying N reference images according to at least one of the M original images, where N is an integer greater than or equal to 2 and at least one image display parameter differs between any two reference images; receiving an input operation on a target reference image among the N reference images; and, in response to the input operation, performing image synthesis on the M original images according to the image display parameters of the target reference image to generate a target image. Based on the interaction between the terminal device and the user, the embodiment of the invention can conveniently and reliably obtain an image that matches the user's preference.

Description

Image processing method, terminal equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, a terminal device, and a computer-readable storage medium.
Background
With the rapid development of communication technology, terminal devices such as mobile phones and tablet computers have become increasingly common, and many consumers now rely on them for tasks such as image capture.
When photographing in a High-Dynamic Range (HDR) mode, the terminal device operates as a one-key process: it first collects a plurality of original images and then synthesizes them into a final image according to a preset scheme.
However, the resulting image may well not match the user's preference, forcing the user to edit it afterwards, which is tedious.
Disclosure of Invention
The embodiment of the invention provides an image processing method, a terminal device, and a computer-readable storage medium, so as to solve the problem that, when photographing, obtaining an image that matches the user's preference on a terminal device is very cumbersome.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, which is applied to a terminal device, and the method includes:
collecting M original images; wherein M is an integer greater than or equal to 2;
generating and displaying N reference images according to at least one original image in the M original images; wherein N is an integer greater than or equal to 2, and at least one image display parameter of any two reference images is different;
receiving an input operation on a target reference image in the N reference images;
and responding to the input operation, and performing image synthesis on the M original images according to the image display parameters of the target reference image to generate a target image.
In a second aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes:
the acquisition module is used for acquiring M original images; wherein M is an integer greater than or equal to 2;
the first processing module is used for generating and displaying N reference images according to at least one original image in the M original images; wherein N is an integer greater than or equal to 2, and at least one image display parameter of any two reference images is different;
the receiving module is used for receiving input operation of a target reference image in the N reference images;
and the synthesis module is used for responding to the input operation, and carrying out image synthesis on the M original images according to the image display parameters of the target reference image to generate a target image.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method described above.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the image processing method described above.
Therefore, in the embodiment of the invention, during photographing, a series of reference images with different image display parameters are provided for a user to select in the image synthesis link, and image synthesis is carried out according to the image display parameters of the reference images selected by the user, so that the user can participate in the image synthesis link, and the style and the image display effect of the target image obtained through image synthesis can accord with the preference of the user. Therefore, compared with the prior art, the embodiment of the invention can obtain the image which is in line with the preference of the user very conveniently and reliably based on the interaction (which can be called as man-machine interaction) between the terminal equipment and the user when taking the picture.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a flow chart of an image processing method provided by an embodiment of the invention;
FIG. 2 is a second flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a third flowchart of an image processing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 5 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 6 is a third schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 7 is a fourth schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of another terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, an image processing method according to an embodiment of the present invention will be described.
It should be noted that the image processing method provided by the embodiment of the present invention is applied to a terminal device. Specifically, the terminal device may be a computer, a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the present invention is shown. As shown in fig. 1, the method is applied to a terminal device, and includes the following steps:
step 101, collecting M original images; wherein M is an integer greater than or equal to 2.
It should be noted that the terminal device in the embodiment of the present invention may specifically be a terminal device having an HDR photographing function. HDR technology is now widely used in such devices and can effectively compensate for the limited dynamic range of the camera sensor, which otherwise causes the brightest parts of a photographed image to be overexposed and/or the darkest parts to be underexposed.
In step 101, the terminal device may be in an HDR photographing mode, and acquire M original images in the HDR photographing mode; the M original images can be shot in the same scene, and the image brightness of any two original images in the M original images is different.
It is noted that, before performing the capture operation, the terminal device may determine the number of images to be captured (i.e., the value of M) and the exposure parameter of each image. Specifically, the number of images and the exposure parameters may be fixed in advance; alternatively, they may be determined dynamically from a single auto-metered image of the current scene, in which case the number of differently exposed images required and the exposure parameter of each can be estimated as accurately as possible from that one image.
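The dynamic determination described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the clipping thresholds, EV step, and frame cap are assumed values chosen for the example.

```python
# Hypothetical sketch: choose the number of captures (M) and per-capture
# exposure compensation from a single auto-metered frame's histogram.
# The thresholds and EV steps below are illustrative assumptions.

def plan_bracket(shadow_clip: float, highlight_clip: float,
                 ev_step: float = 1.0, max_frames: int = 5) -> list[float]:
    """Return a list of EV offsets, one per frame to capture.

    shadow_clip / highlight_clip: fraction of pixels clipped at the
    dark / bright end of the metered frame (0.0 .. 1.0).
    """
    evs = [0.0]                       # always capture the metered exposure
    # Add overexposed frames to recover shadow detail.
    if shadow_clip > 0.01:
        evs.append(+ev_step)
        if shadow_clip > 0.10:
            evs.append(+2 * ev_step)
    # Add underexposed frames to recover highlight detail.
    if highlight_clip > 0.01:
        evs.append(-ev_step)
        if highlight_clip > 0.10:
            evs.append(-2 * ev_step)
    return sorted(evs)[:max_frames]

# A scene with heavy highlight clipping and mild shadow clipping:
bracket = plan_bracket(shadow_clip=0.03, highlight_clip=0.15)
# → [-2.0, -1.0, 0.0, 1.0], i.e. M = 4 frames
```

A scene with no clipping would yield a single frame, while strong clipping at both ends yields the full bracket.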
Step 102, generating and displaying N reference images according to at least one original image in the M original images; wherein N is an integer greater than or equal to 2, and at least one image display parameter of any two reference images is different.
Here, M and N may be 2, 3, 4, 5, or an integer greater than 5, and the values of M and N may be the same or different.
The image display parameters include, but are not limited to, image brightness, image contrast, image sharpness, image color shade, and the like. Since at least one image display parameter of any two reference images is different, the following situations may exist for any two reference images: one reference image is brighter and the other reference image is darker; the contrast of one reference image is higher, and the contrast of the other reference image is lower; the sharpness of one reference image is higher, and the sharpness of the other reference image is lower; one reference image is more colorful and the other reference image is less colorful.
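One "group of image display parameters" can be sketched as a small record, with N groups generated so that any two differ in at least one field. The field names and value ranges here are assumptions made for illustration, not values from the patent.

```python
# Illustrative sketch of a group of image display parameters and of
# producing N groups that each differ in at least one parameter.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DisplayParams:
    brightness: float = 0.0   # EV-like offset
    contrast: float = 1.0     # gain around mid-grey
    sharpness: float = 1.0    # unsharp-mask strength
    saturation: float = 1.0   # colour shade

def make_variants(base: DisplayParams, n: int = 3) -> list[DisplayParams]:
    """Spread contrast around the base value so any two groups differ."""
    steps = [1.0 + 0.25 * (i - (n - 1) / 2) for i in range(n)]
    return [replace(base, contrast=base.contrast * s) for s in steps]

groups = make_variants(DisplayParams(), n=3)
# contrast values: 0.75, 1.0, 1.25 — every pair differs in >= 1 parameter
```

A real implementation might vary several fields at once (e.g. one darker high-contrast group, one brighter low-saturation group), but varying a single axis is enough to satisfy the "any two differ" condition.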
In addition, the terminal device can generate and display N reference images according to partial original images in the M original images; alternatively, the terminal device may generate and display N reference images from all of the M original images. In order to facilitate the understanding of the present invention for those skilled in the art, the latter case is used as an example in each embodiment of the present invention.
Step 103, receiving an input operation on a target reference image in the N reference images.
Specifically, with the N reference images displayed on the terminal device, the user may perform a touch operation on the target reference image, such as a click, press, or drag operation; this touch operation may serve as the input operation of step 103.
It is emphasized that the target reference image is any one of the N reference images, and the "target" in the target reference image does not constitute any limitation on the target reference image.
Step 104, responding to the input operation, and performing image synthesis on the M original images according to the image display parameters of the target reference image to generate a target image.
Image synthesis means combining a plurality of images into one image; two methods are commonly used at present, namely exposure fusion and tone mapping. With exposure fusion, the terminal device combines the M original images into a single target image that retains only the correctly exposed elements, according to the image display parameters of the target reference image; with tone mapping, the terminal device first synthesizes the M original images into a high-bit-depth HDR image according to those parameters and then tone-maps it to obtain the target image.
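The exposure-fusion branch can be sketched minimally as follows. This is not the patent's implementation: it weights each pixel only by how well-exposed it is (close to mid-grey), whereas full fusion schemes such as Mertens fusion also weight by contrast and saturation.

```python
# Minimal exposure-fusion sketch: weight each pixel by well-exposedness,
# then blend the M frames with normalised weights.
import numpy as np

def exposure_fuse(frames: list, sigma: float = 0.2) -> np.ndarray:
    """frames: list of float images in [0, 1] with identical shapes."""
    stack = np.stack(frames)                       # (M, H, W)
    # Well-exposedness: Gaussian centred on mid-grey 0.5.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise over frames
    return (weights * stack).sum(axis=0)

dark = np.full((4, 4), 0.1)    # underexposed frame
mid = np.full((4, 4), 0.5)     # well-exposed frame
bright = np.full((4, 4), 0.9)  # overexposed frame
fused = exposure_fuse([dark, mid, bright])
# The fused image is pulled toward the well-exposed frame's values.
```

The target reference image's display parameters would enter such a scheme as modifications to the weighting (e.g. biasing the Gaussian centre to brighten or darken the result).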
In the embodiment of the invention, in the HDR photographing mode, after acquiring M original images, the terminal device can generate and display N reference images according to at least one of the M original images; at least one image display parameter of any two reference images is different, and correspondingly, the styles and image display effects of any two reference images also differ. Next, the user may perform an input operation on the target reference image among the N reference images according to his or her preference, thereby selecting that image's style and display effect. The terminal device then performs image synthesis on the M original images according to the image display parameters of the target reference image, so that the style and display effect of the generated target image are consistent with those of the target reference image and better match the user's preference.
Therefore, in the embodiment of the invention, during photographing, a series of reference images with different image display parameters are provided for a user to select in the image synthesis link, and image synthesis is carried out according to the image display parameters of the reference images selected by the user, so that the user can participate in the image synthesis link, and the style and the image display effect of the target image obtained through image synthesis can accord with the preference of the user. Therefore, compared with the prior art, the embodiment of the invention can obtain the image which is in line with the preference of the user very conveniently and reliably based on the interaction (which can be called as man-machine interaction) between the terminal equipment and the user when taking the picture.
It should be noted that, in the prior art, a photographing style is often preset in the terminal device. Once that preset style differs from the style the user prefers, the user must edit and adjust every image the terminal device synthesizes using an image processing tool, which greatly harms the photographing experience. Moreover, in the prior art, part of the image information is lost during image synthesis and cannot be recovered afterwards; constrained by screen size and the awkwardness of touch operation, editing and adjustment are inconvenient to perform and rarely reach an ideal result; and any adjustment that improves one aspect of the image tends to sacrifice another. By contrast, the embodiment of the invention improves the image synthesis link itself: the user selects the preferred style and image display effect within that link, and the image information of all M original images can be fully exploited during synthesis, which helps generate a high-quality target image without loss.
Referring to fig. 2, a second flowchart of an image processing method according to an embodiment of the present invention is shown. The embodiment shown in fig. 2 differs from the embodiment shown in fig. 1 mainly in that: one particular way of generating and displaying N reference images is provided. As shown in fig. 2, the method comprises the steps of:
step 201, collecting M original images; wherein M is an integer greater than or equal to 2.
The specific implementation process of step 201 may refer to the description of step 101, and is not described herein again.
Step 202, generating M processed images according to the M original images.
Here, the M processed images and the M original images may be regarded as being in a one-to-one correspondence.
Step 203, acquiring N groups of image display parameters; wherein at least one image display parameter of any two sets of image display parameters is different.
The number of the image display parameters in each group of image display parameters can be 3, 4, 5 or an integer greater than 5.
Step 204, respectively carrying out image synthesis on the M processed images according to each group of image display parameters in the N groups of image display parameters, and generating and displaying N reference images.
Here, the N reference images and the N sets of image display parameters may be regarded as a one-to-one correspondence relationship.
Step 205, receiving an input operation for a target reference image in the N reference images.
Step 206, responding to the input operation, and performing image synthesis on the M original images according to the image display parameters of the target reference image to generate a target image.
The specific implementation process of step 205 to step 206 may refer to the description of step 103 to step 104, and is not described herein again.
In the embodiment of the present invention, the N reference images and the N sets of image display parameters may be regarded as a one-to-one correspondence relationship, so that any reference image (assumed to be P1) can present a style and an image display effect of an image that can be obtained by the terminal device after the M processed images are image-synthesized according to the corresponding set of image display parameters (assumed to be Z1).
Further, since the M processed images and the M original images may be regarded as being in one-to-one correspondence, the style and image display effect exhibited by P1 can be regarded as those of the image the terminal device would obtain by synthesizing the M original images according to Z1. Therefore, from the N reference images, the user can see the styles and display effects that would result from synthesizing the M original images under the different groups of image display parameters, making it easy to select the one matching his or her preference, so that the target image finally generated by the terminal device matches that preference.
Optionally, step 202 includes:
each of the M original images is subjected to reduction processing to generate M processed images.
It should be noted that, after any of the M original images is reduced, the resolution of the resulting processed image is smaller than that of the original; for example, the original image may be 9 million pixels and the processed image 1 million pixels. In this way, when the M processed images are subsequently synthesized to generate the reference images, the computational complexity and the time taken by the synthesis are reduced compared with synthesizing the M original images directly.
Note that the manner of generating the M processed images from the M original images is not limited to the above. For example, the terminal device may obtain, from local storage, another M images of the same shooting scene as the M original images and treat them as the M processed images.
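The reduction step described above can be sketched as a simple average-pooling downscale. The pooling factor here is an illustrative assumption (a 9 MP to roughly 1 MP reduction corresponds to about a 3x reduction per axis).

```python
# Sketch of the reduction step: average-pool each original frame by an
# integer factor so reference previews are cheap to synthesise.
import numpy as np

def downscale(img: np.ndarray, factor: int = 3) -> np.ndarray:
    """Average-pool a (H, W) image; H and W must be divisible by factor."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

original = np.arange(36, dtype=float).reshape(6, 6)   # stand-in frame
small = downscale(original, factor=3)                  # (2, 2) preview
```

Applying this to each of the M originals yields the M processed images on which the N reference previews are synthesised.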
Referring to fig. 3, a third flowchart of an image processing method according to an embodiment of the present invention is shown. The main difference between the embodiment shown in fig. 3 and the embodiment shown in fig. 2 is that: one particular way of generating N sets of image display parameters is provided. As shown in fig. 3, the method comprises the steps of:
step 301, collecting M original images; wherein M is an integer greater than or equal to 2.
Step 302, generating M processed images according to the M original images.
The specific implementation process of steps 301 to 302 may refer to the description of steps 201 to 202, and is not described herein again.
Step 303, calling a parameter generation model to generate N groups of image display parameters; wherein at least one image display parameter of any two sets of image display parameters is different.
Specifically, the terminal device may have an Artificial Intelligence (AI) engine therein, and the parameter generation model may be located in the AI engine.
Step 304, respectively carrying out image synthesis on the M processed images according to each group of image display parameters in the N groups of image display parameters, and generating and displaying N reference images.
Step 305, receiving an input operation on a target reference image in the N reference images.
And step 306, responding to the input operation, and performing image synthesis on the M original images according to the image display parameters of the target reference image to generate a target image.
The specific implementation process from step 304 to step 306 may refer to the description from step 204 to step 206, and is not described herein again.
Step 307, comparing the image display parameters of the target reference image with the image display parameters of other reference images to obtain a comparison result; wherein, the other reference images are reference images except the target reference image in the N reference images;
Step 308, adjusting the model parameters of the parameter generation model according to the comparison result.
It should be noted that, when the terminal device leaves the factory, the parameter generation model in the AI engine may be a model obtained by an engineer through training. In the process of using the terminal device by the user, the model parameters in the parameter generation model can be adjusted according to the input operation of the user.
Suppose that, at some point, 9 original images are acquired (i.e., M is 9); the terminal device reduces each of the 9 original images to generate 9 processed images, and calls the parameter generation model to generate 3 groups of image display parameters (i.e., N is 3), where the image contrast D1 in the first group is the largest, the image contrast D2 in the second group is the next largest, and the image contrast D3 in the third group is the smallest.
Next, the terminal device performs image synthesis on the 9 processed images according to the first set of image display parameters to generate a first reference image; performing image synthesis on the 9 processed images according to the second group of image display parameters to generate a second reference image; image synthesis is performed on the 9 processed images according to the third set of image display parameters to generate a third reference image.
Then, if the user selects the first reference image through the input operation, the terminal device performs image synthesis on the 9 original images according to the first group of image display parameters to generate a target image; the terminal device also compares the first set of image display parameters with the second set of image display parameters and the third set of image display parameters, respectively. Since the image contrast D1 in the first set of image display parameters is the largest, the terminal device may adjust the model parameters of the parameter generation model, so that the image contrast in the 3 sets of image display parameters generated by the parameter generation model next time is all improved by a certain percentage on the basis of this time.
If the user selects the third reference image through the input operation, the terminal device may adjust the model parameters of the parameter generation model, so that the image contrast in the 3 sets of image display parameters generated by the parameter generation model next time is all reduced by a certain percentage on the basis of this time.
It can be seen that, in the embodiment of the present invention, the terminal device adjusts the model parameters of the parameter generation model according to the image display parameters of the reference image the user selects through the input operation. Thus, after the user has used the terminal device for a period of time, the parameter generation model effectively becomes a model trained on the user's own choices, and subsequent calls to it will generate reference images the user is more likely to select, so that the target images generated later match the user's preference even better.
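The feedback loop in the contrast example above can be sketched as a simple bias update. The 10% nudge is an assumed value, not one specified in the patent, and a real parameter generation model would update many parameters jointly.

```python
# Illustrative sketch of the feedback loop: compare the contrast of the
# user-selected parameter group against the other groups and nudge a
# bias that shifts the next round of generated parameters.

def adjust_contrast_bias(bias: float, chosen: float,
                         others: list, step: float = 0.10) -> float:
    """Raise the bias if the user picked the highest-contrast group,
    lower it if they picked the lowest, leave it otherwise."""
    if chosen > max(others):
        return bias * (1 + step)   # user prefers more contrast
    if chosen < min(others):
        return bias * (1 - step)   # user prefers less contrast
    return bias

bias = 1.0
# User selected the first (highest-contrast) reference image, D1 > D2 > D3:
bias = adjust_contrast_bias(bias, chosen=1.3, others=[1.0, 0.8])
# bias rises, so the next three generated groups all shift toward higher contrast
```

Choosing the middle group leaves the bias untouched, matching the intuition that the model's current centre already suits the user.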
Optionally, before the image synthesis is performed on the M original images, the method further includes:
carrying out alignment processing and motion compensation processing on the M original images;
performing image synthesis on the M original images, wherein the image synthesis comprises the following steps:
and performing image synthesis on the M original images subjected to the alignment processing and the motion compensation processing.
It should be noted that the M original images are generally acquired in sequence by the same camera; during acquisition, slight movement of the user's hand may shift the whole image, and objects in the scene may themselves move. Therefore, in this embodiment, the terminal device may perform alignment processing and motion compensation processing on the M original images and synthesize the aligned, motion-compensated images, effectively ensuring the image quality of the resulting target image.
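A minimal alignment step can be sketched as a brute-force search for the integer translation that best registers a frame against the reference. This is an assumption-laden toy: production pipelines would typically use phase correlation or feature matching rather than an exhaustive SSD search.

```python
# Sketch of alignment: find the integer (dy, dx) shift that minimises
# the sum of squared differences (SSD) over a small search window.
import numpy as np

def align_shift(ref: np.ndarray, frame: np.ndarray, radius: int = 2):
    """Return the (dy, dx) that minimises SSD between ref and shifted frame."""
    best, best_err = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(frame, (dy, dx), axis=(0, 1))
            err = ((ref - shifted) ** 2).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

ref = np.zeros((8, 8)); ref[3, 3] = 1.0          # reference frame
frame = np.roll(ref, (1, -1), axis=(0, 1))       # camera moved slightly
dy, dx = align_shift(ref, frame)                  # the shift that undoes the motion
```

Motion compensation for objects moving within the scene is a separate, harder problem (per-region rather than global shifts) and is not covered by this sketch.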
As can be seen from the above embodiments, the HDR photographing process mainly includes the following five links. In the first link, in the HDR photographing mode, M original images with different brightness are collected for the same scene. In the second link, alignment processing and motion compensation processing are performed on the M original images. In the third link, each of the M original images is reduced to generate M processed images, the parameter generation model is called to generate N groups of image display parameters, and N reference images are generated. In the fourth link, the N reference images are presented to the user, who selects a preferred one; the terminal device then performs image synthesis on the M original images according to the image display parameters of the selected reference image to generate the target image, and also transmits those parameters to the AI engine to adjust the model parameters, and hence the parameter generation tendency, of the parameter generation model. In the fifth link, the terminal device repeats the above, so that after the user has selected reference images a number of times, the image display parameters generated by the model closely match the user's preference; at that point the user can manually turn off the reference-image selection step.
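The five links above can be tied together in a compact control-flow sketch. Every function here is an assumed placeholder standing in for the stage described in the text, not the patent's implementation; the synthesis is reduced to a mean-and-gain stub so the flow stays visible.

```python
# End-to-end sketch of the five links, with each stage reduced to a stub.
import numpy as np

def capture_bracket(m=3):                 # link 1: M frames, varied exposure
    return [np.full((6, 6), v) for v in (0.2, 0.5, 0.8)][:m]

def align(frames):                        # link 2: alignment / motion comp.
    return frames                         # no-op stub

def make_references(frames, n=3):         # link 3: downscale + N previews
    small = [f[::3, ::3] for f in frames]
    gains = [0.8, 1.0, 1.2]               # stand-in for N parameter groups
    refs = [np.clip(np.mean(small, axis=0) * g, 0, 1) for g in gains[:n]]
    return refs, gains[:n]

def synthesise(frames, gain):             # link 4: full-resolution synthesis
    return np.clip(np.mean(frames, axis=0) * gain, 0, 1)

frames = align(capture_bracket())
refs, gains = make_references(frames)
chosen = 2                                # pretend the user tapped the 3rd preview
target = synthesise(frames, gains[chosen])
# link 5: feed gains[chosen] back to the parameter model (omitted here)
```

The key property the flow preserves is that the previews in link 3 are built from cheap downscaled frames, while the final target in link 4 is synthesised from the full-resolution originals under the same selected parameters.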
In summary, compared with the prior art, the embodiment of the invention can obtain the image which is in line with the preference of the user very conveniently, quickly and reliably based on the interaction between the terminal device and the user when taking the picture.
The following describes a terminal device provided in an embodiment of the present invention.
Referring to fig. 4, a schematic structural diagram of a terminal device 400 according to an embodiment of the present invention is shown. As shown in fig. 4, the terminal device 400 includes:
the acquisition module 401 is used for acquiring M original images; wherein M is an integer greater than or equal to 2;
a first processing module 402, configured to generate and display N reference images according to at least one original image of the M original images; wherein N is an integer greater than or equal to 2, and at least one image display parameter of any two reference images is different;
a receiving module 403, configured to receive an input operation on a target reference image in the N reference images;
and a synthesizing module 404, configured to perform image synthesis on the M original images according to the image display parameters of the target reference image in response to the input operation, so as to generate a target image.
Optionally, on the basis of fig. 4, as shown in fig. 5, the first processing module 402 includes:
a generating unit 4021 configured to generate M processed images from the M original images;
an obtaining unit 4022, configured to obtain N sets of image display parameters; wherein at least one image display parameter of any two groups of image display parameters is different;
the processing unit 4023 is configured to perform image synthesis on the M processed images according to each of the N sets of image display parameters, and generate and display N reference images.
Optionally, the generating unit 4021 is specifically configured to:
each of the M original images is subjected to reduction processing to generate M processed images.
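The reduction processing can be illustrated with simple block averaging; the averaging approach and the choice of factor below are assumptions for the example, as the patent does not specify the reduction algorithm.

```python
import numpy as np

def reduce_image(img, factor=4):
    """Downscale a 2-D image by averaging non-overlapping factor x factor blocks."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of the factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```

Applying `reduce_image` to each of the M originals yields the M processed images at a fraction of the cost of full-resolution synthesis, which is the apparent motivation for generating the reference images from reduced copies.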
Optionally, the obtaining unit 4022 is specifically configured to:
calling a parameter generation model to generate N groups of image display parameters.
On the basis of fig. 5, as shown in fig. 6, the terminal device 400 further includes:
the comparing module 411 is configured to compare the image display parameters of the target reference image with the image display parameters of other reference images after receiving an input operation on the target reference image in the N reference images, so as to obtain a comparison result; wherein, the other reference images are reference images except the target reference image in the N reference images;
and an adjusting module 412, configured to adjust the model parameters of the parameter generating model according to the comparison result.
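The compare-and-adjust behaviour could, for example, nudge the model's generation tendency toward the selected parameter set and away from the rejected ones. The mean-vector representation of the model and the learning rate below are assumptions for illustration, not the patent's stated mechanism.

```python
import numpy as np

def adjust_model(model_mean, chosen, others, lr=0.1):
    """Shift the model's mean parameter vector toward the selected set.

    model_mean : current mean of the parameter generation model (assumed form)
    chosen     : parameter set of the user-selected reference image
    others     : parameter sets of the remaining (rejected) reference images
    """
    model_mean = np.asarray(model_mean, dtype=float)
    chosen = np.asarray(chosen, dtype=float)
    # Comparison result: how the chosen set differs from the average rejected set
    rejected_mean = np.mean(np.asarray(others, dtype=float), axis=0)
    delta = chosen - rejected_mean
    # Move the generation tendency in the preferred direction
    return model_mean + lr * delta
```

Repeated over many selections, updates of this kind would drive the generated parameters toward the user's preferences, matching the behaviour described in the fifth stage above.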
Optionally, on the basis of fig. 4, as shown in fig. 7, the terminal device 400 further includes:
a second processing module 421, configured to perform alignment processing and motion compensation processing on the M original images before performing image synthesis on the M original images;
the synthesis module 404 is specifically configured to:
and performing image synthesis on the M original images subjected to the alignment processing and the motion compensation processing.
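Global alignment of the M originals could, as one sketch, use phase correlation to estimate an integer translation against the first frame and then shift each image to compensate. Real pipelines also handle rotation, scaling, and local (object) motion; those, and the circular-shift model used here, are simplifying assumptions of this illustration.

```python
import numpy as np

def align_to_reference(images):
    """Align each image to images[0] by a global integer shift estimated
    with phase correlation (a stand-in for alignment plus motion compensation)."""
    ref_fft = np.fft.fft2(images[0])
    aligned = [images[0]]
    for img in images[1:]:
        # Cross-power spectrum; its inverse FFT peaks at the negative of the shift
        cross = ref_fft * np.conj(np.fft.fft2(img))
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Shifts beyond half the image size wrap around and are negative
        if dy > img.shape[0] // 2:
            dy -= img.shape[0]
        if dx > img.shape[1] // 2:
            dx -= img.shape[1]
        aligned.append(np.roll(img, (dy, dx), axis=(0, 1)))
    return aligned
```

Synthesizing the aligned set rather than the raw captures avoids ghosting when the camera moves slightly between the M exposures.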
Therefore, in the embodiment of the invention, when photographing, a series of reference images with different image display parameters is provided for the user to select from during the image synthesis stage, and image synthesis is performed according to the image display parameters of the reference image selected by the user. The user thus participates in the image synthesis stage, so the style and display effect of the target image obtained by image synthesis match the user's preferences. Compared with the prior art, based on the interaction between the terminal device 400 and the user (which may be referred to as human-computer interaction), the embodiment of the present invention can conveniently and reliably obtain an image that matches the user's preferences when taking a picture.
Referring to fig. 8, a schematic diagram of a hardware structure of a terminal device 800 implementing various embodiments of the present invention is shown. As shown in fig. 8, terminal device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 8 does not constitute a limitation of the terminal device, and that terminal device 800 may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
Wherein, the processor 810 is configured to:
collecting M original images; wherein M is an integer greater than or equal to 2;
generating and displaying N reference images according to at least one original image in the M original images; wherein N is an integer greater than or equal to 2, and at least one image display parameter of any two reference images is different;
receiving an input operation on a target reference image in the N reference images;
and responding to the input operation, and performing image synthesis on the M original images according to the image display parameters of the target reference image to generate a target image.
Optionally, the processor 810 is specifically configured to:
generating M processed images according to the M original images;
obtaining N groups of image display parameters; wherein at least one image display parameter of any two groups of image display parameters is different;
and respectively carrying out image synthesis on the M processed images according to each group of image display parameters in the N groups of image display parameters, and generating and displaying N reference images.
Optionally, the processor 810 is specifically configured to:
each of the M original images is subjected to reduction processing to generate M processed images.
Optionally, the processor 810 is specifically configured to:
calling a parameter generation model to generate N groups of image display parameters.
The processor 810 is further configured to:
after receiving input operation on a target reference image in the N reference images, comparing image display parameters of the target reference image with image display parameters of other reference images to obtain a comparison result; wherein, the other reference images are reference images except the target reference image in the N reference images;
and adjusting the model parameters of the parameter generation model according to the comparison result.
Optionally, the processor 810 is further configured to:
before image synthesis is carried out on the M original images, carrying out alignment processing and motion compensation processing on the M original images;
the processor 810 is specifically configured to:
and performing image synthesis on the M original images subjected to the alignment processing and the motion compensation processing.
Therefore, in the embodiment of the invention, when photographing, a series of reference images with different image display parameters is provided for the user to select from during the image synthesis stage, and image synthesis is performed according to the image display parameters of the reference image selected by the user. The user thus participates in the image synthesis stage, so the style and display effect of the target image obtained by image synthesis match the user's preferences. Compared with the prior art, based on the interaction between the terminal device 800 and the user (which may be referred to as human-computer interaction), the embodiment of the present invention can conveniently and reliably obtain an image that matches the user's preferences when taking a picture.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used for receiving and sending signals during a message transceiving process or a call; specifically, it receives downlink data from a base station and forwards the data to the processor 810 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. Further, the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
The terminal device 800 provides the user with wireless broadband internet access through the network module 802, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802, or stored in the memory 809, into an audio signal and output it as sound. Moreover, the audio output unit 803 may also provide audio output related to a specific function performed by the terminal device 800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used for receiving audio or video signals. The input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042. The graphics processor 8041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 806. The image frames processed by the graphics processor 8041 may be stored in the memory 809 (or other storage medium) or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 can receive sound and process it into audio data; in the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 801.
The terminal device 800 also includes at least one sensor 805, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 8061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 8061 and/or the backlight when the terminal device 800 is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally along three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the terminal device's posture (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection). The sensor 805 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail here.
The display unit 806 is used to display information input by the user or information provided to the user. The Display unit 806 may include a Display panel 8061, and the Display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 807 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 810, and receives and executes commands from the processor 810. In addition, the touch panel 8071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 8071, the user input unit 807 can include other input devices 8072. In particular, the other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 8071 can be overlaid on the display panel 8061, and when the touch panel 8071 detects a touch operation on or near the touch panel 8071, the touch operation is transmitted to the processor 810 to determine the type of the touch event, and then the processor 810 provides a corresponding visual output on the display panel 8061 according to the type of the touch event. Although in fig. 8, the touch panel 8071 and the display panel 8061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the terminal device, and this is not limited herein.
The interface unit 808 is an interface for connecting an external device to the terminal apparatus 800. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 800 or may be used to transmit data between the terminal apparatus 800 and an external device.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. Further, the memory 809 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 810 is a control center of the terminal device 800, connects various parts of the entire terminal device by various interfaces and lines, executes various functions of the terminal device 800 and processes data by running or executing software programs and/or modules stored in the memory 809 and calling data stored in the memory 809, thereby monitoring the terminal device 800 as a whole. Processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
Terminal device 800 may also include a power supply 811 (such as a battery) for powering the various components, and preferably, power supply 811 may be logically coupled to processor 810 via a power management system to provide management of charging, discharging, and power consumption via the power management system.
In addition, the terminal device 800 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, which includes a processor 810, a memory 809, and a computer program stored in the memory 809 and capable of running on the processor 810, where the computer program is executed by the processor 810 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method applied to a terminal device is characterized by comprising the following steps:
collecting M original images; wherein M is an integer greater than or equal to 2;
generating and displaying N reference images according to at least one original image in the M original images; wherein N is an integer greater than or equal to 2, and at least one image display parameter of any two reference images is different;
receiving an input operation on a target reference image in the N reference images;
responding to the input operation, and performing image synthesis on the M original images according to the image display parameters of the target reference image to generate a target image;
generating and displaying N reference images according to at least one original image in the M original images, wherein the method comprises the following steps:
generating M processed images according to the M original images;
obtaining N groups of image display parameters; wherein at least one image display parameter of any two groups of image display parameters is different;
and respectively carrying out image synthesis on the M processed images according to each group of image display parameters in the N groups of image display parameters, and generating and displaying N reference images.
2. The method of claim 1, wherein generating M processed images from the M raw images comprises:
and respectively carrying out reduction processing on each original image in the M original images to generate M processed images.
3. The method of claim 1,
the obtaining of the N groups of image display parameters includes:
calling a parameter generation model to generate N groups of image display parameters;
after receiving the input operation on the target reference image in the N reference images, the method further includes:
comparing the image display parameters of the target reference image with the image display parameters of other reference images to obtain a comparison result; wherein the other reference images are reference images of the N reference images except the target reference image;
and adjusting the model parameters of the parameter generation model according to the comparison result.
4. The method of claim 1,
before the image synthesis of the M original images, the method further includes:
carrying out alignment processing and motion compensation processing on the M original images;
the image synthesis of the M original images comprises:
and performing image synthesis on the M original images subjected to the alignment processing and the motion compensation processing.
5. A terminal device, characterized in that the terminal device comprises:
the acquisition module is used for acquiring M original images; wherein M is an integer greater than or equal to 2;
the first processing module is used for generating and displaying N reference images according to at least one original image in the M original images; wherein N is an integer greater than or equal to 2, and at least one image display parameter of any two reference images is different;
the receiving module is used for receiving input operation of a target reference image in the N reference images;
the synthesis module is used for responding to the input operation, and performing image synthesis on the M original images according to the image display parameters of the target reference image to generate a target image;
the first processing module comprises:
a generating unit for generating M processed images according to the M original images;
an obtaining unit, configured to obtain N sets of image display parameters; wherein at least one image display parameter of any two groups of image display parameters is different;
and the processing unit is used for carrying out image synthesis on the M processed images according to each group of image display parameters in the N groups of image display parameters respectively, and generating and displaying N reference images.
6. The terminal device according to claim 5, wherein the generating unit is specifically configured to:
and respectively carrying out reduction processing on each original image in the M original images to generate M processed images.
7. The terminal device of claim 5,
the obtaining unit is specifically configured to:
calling a parameter generation model to generate N groups of image display parameters;
the terminal device further includes:
the comparison module is used for comparing the image display parameters of the target reference image with the image display parameters of other reference images after receiving the input operation of the target reference image in the N reference images to obtain a comparison result; wherein the other reference images are reference images of the N reference images except the target reference image;
and the adjusting module is used for adjusting the model parameters of the parameter generation model according to the comparison result.
8. The terminal device of claim 5,
the terminal device further includes:
the second processing module is used for carrying out alignment processing and motion compensation processing on the M original images before carrying out image synthesis on the M original images;
the synthesis module is specifically configured to:
and performing image synthesis on the M original images subjected to the alignment processing and the motion compensation processing.
9. A terminal device, characterized in that it comprises a processor, a memory, a computer program stored on said memory and executable on said processor, said computer program realizing the steps of the image processing method according to any one of claims 1 to 4 when executed by said processor.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 4.
CN201811622123.3A 2018-12-28 2018-12-28 Image processing method, terminal equipment and computer readable storage medium Active CN109729269B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811622123.3A CN109729269B (en) 2018-12-28 2018-12-28 Image processing method, terminal equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109729269A CN109729269A (en) 2019-05-07
CN109729269B true CN109729269B (en) 2020-10-30

Family

ID=66297504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811622123.3A Active CN109729269B (en) 2018-12-28 2018-12-28 Image processing method, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109729269B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012082A (en) * 2021-02-09 2021-06-22 北京字跳网络技术有限公司 Image display method, apparatus, device and medium

Citations (9)

Publication number Priority date Publication date Assignee Title
CN102761695A (en) * 2011-04-28 2012-10-31 佳能株式会社 Imaging apparatus and control method thereof
CN103561205A (en) * 2013-11-15 2014-02-05 深圳市中兴移动通信有限公司 Shooting method and shooting device
CN104243822A (en) * 2014-09-12 2014-12-24 广州三星通信技术研究有限公司 Method and device for shooting images
CN104869314A (en) * 2015-05-28 2015-08-26 小米科技有限责任公司 Photographing method and device
CN105453134A (en) * 2013-08-12 2016-03-30 三星电子株式会社 A method and apparatus for dynamic range enhancement of an image
CN105847703A (en) * 2016-03-28 2016-08-10 联想(北京)有限公司 Image processing method and electronic device
CN107483836A (en) * 2017-09-27 2017-12-15 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107809593A (en) * 2017-11-13 2018-03-16 广东欧珀移动通信有限公司 Method, apparatus, terminal and the storage medium of shooting image
CN108307109A (en) * 2018-01-16 2018-07-20 维沃移动通信有限公司 A kind of high dynamic range images method for previewing and terminal device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
KR20150051085A (en) * 2013-11-01 2015-05-11 삼성전자주식회사 Method for obtaining high dynamic range image,Computer readable storage medium of recording the method and a digital photographing apparatus.
US9357127B2 (en) * 2014-03-18 2016-05-31 Google Technology Holdings LLC System for auto-HDR capture decision making

Non-Patent Citations (1)

Title
Zhang Yun; "High Dynamic Range Image Synthesis Based on SoC-FPGA"; China Master's Theses Full-text Database, Information Science and Technology; 20170331; pp. I138-4859 *

Similar Documents

Publication Publication Date Title
CN107566739B (en) photographing method and mobile terminal
CN110740259B (en) Video processing method and electronic equipment
CN109361865B (en) Shooting method and terminal
CN108307109B (en) High dynamic range image preview method and terminal equipment
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN110365907B (en) Photographing method and device and electronic equipment
CN109218626B (en) Photographing method and terminal
CN109361867B (en) Filter processing method and mobile terminal
CN110602401A (en) Photographing method and terminal
CN108280817B (en) Image processing method and mobile terminal
CN108449541B (en) Panoramic image shooting method and mobile terminal
CN111405199B (en) Image shooting method and electronic equipment
CN108848309B (en) Camera program starting method and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
CN111147752B (en) Zoom factor adjusting method, electronic device, and medium
CN109474784B (en) Preview image processing method and terminal equipment
CN109104578B (en) Image processing method and mobile terminal
CN109246351B (en) Composition method and terminal equipment
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN110086998B (en) Shooting method and terminal
CN111131722A (en) Image processing method, electronic device, and medium
CN109639981B (en) Image shooting method and mobile terminal
CN109167917B (en) Image processing method and terminal equipment
CN107817963B (en) Image display method, mobile terminal and computer readable storage medium
CN107734269B (en) Image processing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant