CN111182236A - Image synthesis method and device, storage medium and terminal equipment - Google Patents
- Publication number: CN111182236A (application CN202010005247.8A)
- Authority: CN (China)
- Prior art keywords: image, transparency, target, camera, synthesis method
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Abstract
The application provides an image synthesis method, an image synthesis apparatus, a storage medium, and a terminal device. The image synthesis method is applied to a mobile terminal that includes a first camera and a second camera. The method acquires a first image shot by the first camera and a second image shot by the second camera, where the first image contains at least one target photographic object; adjusts the transparency of the first image according to the target photographic object; and then generates the transparency-adjusted first image on the second image to obtain a composite image containing the target photographic object. The terminal user can therefore obtain the composite image during shooting itself, which reduces the complexity of the user's operations.
Description
Technical Field
The present application relates to the field of communications technologies, and in particular, to an image synthesis method, an image synthesis apparatus, a storage medium, and a terminal device.
Background
With the development of mobile terminal technology, the photographing functions of mobile terminals have become increasingly powerful, and more and more terminal users like to composite the pictures they take in order to obtain pictures that meet their particular needs.
In the prior art, however, a terminal user who wants to composite photographed pictures must install software with an image synthesis function and feed the pictures into that software for processing. The composite image is therefore only available after shooting, not during it.
Disclosure of Invention
The application provides an image synthesis method, an image synthesis apparatus, a storage medium, and a terminal device, which solve the problem that a terminal user cannot obtain a composite image while photographing.
In order to solve the above problem, an embodiment of the present application provides an image synthesis method, which is applied to a mobile terminal, where the mobile terminal includes a first camera and a second camera, and the image synthesis method includes:
acquiring a first image shot by the first camera and a second image shot by the second camera, wherein the first image comprises at least one target shooting object;
performing transparency adjustment on the first image according to the target shooting object;
and generating the first image with the adjusted transparency on the second image to obtain a composite image containing the target shooting object.
In the image synthesis method provided by the present application, the step of adjusting the transparency of the first image according to the target shooting object specifically includes:
determining a display area of each target shooting object on the first image;
and adjusting the transparency of the area except the display area in the first image to be a first preset transparency.
In the image synthesis method provided by the present application, after the step of determining a display area of each of the target photographic objects on the first image, the method may further include:
and adjusting the transparency of the display area to a second preset transparency.
In the image synthesis method provided by the present application, after the step of generating the transparency-adjusted first image on the second image, the method may further include:
displaying the composite image to a user on a current preview interface, wherein the current preview interface is provided with a plurality of beautifying keys;
acquiring a target beautifying key selected by a user from the beautifying keys;
and performing corresponding beautifying operation on the synthesized image according to the target beautifying key.
In the image synthesis method provided by the present application, the step of generating the transparency-adjusted first image on the second image specifically includes:
acquiring a first shooting focal length of the first image and a second shooting focal length of the second image;
adjusting the size of the first image with the adjusted transparency according to the first shooting focal length and the second shooting focal length to obtain a target image;
generating the target image on the second image.
In the image synthesis method provided by the present application, the step of generating the transparency-adjusted first image on the second image specifically includes:
acquiring first image brightness of the first image and second image brightness of the second image;
adjusting the image brightness of the first image with the adjusted transparency according to the first image brightness and the second image brightness to obtain a target image;
generating the target image on the second image.
In order to solve the above problem, an embodiment of the present application further provides an image synthesis apparatus applied to a mobile terminal, where the mobile terminal includes a first camera and a second camera, and the image synthesis apparatus includes:
the acquisition module is used for acquiring a first image shot by the first camera and a second image shot by the second camera, wherein the first image comprises at least one target shooting object;
the setting module is used for carrying out transparency adjustment on the first image according to the target shooting object;
and the generation module is used for generating the first image with the adjusted transparency on the second image so as to obtain a composite image containing the target shooting object.
In the image synthesizing apparatus provided by the present application, the setting module specifically includes:
a determination unit configured to determine a display area of each of the target photographic objects on the first image;
and the setting unit is used for adjusting the transparency of the area except the display area in the first image to be a first preset transparency.
In the image synthesizing apparatus provided by the present application, the image synthesizing apparatus further includes a setting subunit operable to:
and adjusting the transparency of the display area to a second preset transparency.
In the image synthesizing apparatus provided by the present application, the image synthesizing apparatus further includes a beautification module configured to:
displaying the composite image to a user on a current preview interface, wherein the current preview interface is provided with a plurality of beautifying keys;
acquiring a target beautifying key selected by a user from the beautifying keys;
and performing corresponding beautifying operation on the synthesized image according to the target beautifying key.
In the image synthesizing apparatus provided by the present application, the image synthesizing apparatus further includes a first adjusting unit configured to:
acquiring a first shooting focal length of the first image and a second shooting focal length of the second image;
adjusting the size of the first image with the adjusted transparency according to the first shooting focal length and the second shooting focal length to obtain a target image;
generating the target image on the second image.
In the image synthesizing apparatus provided by the present application, the image synthesizing apparatus further includes a second adjusting unit configured to:
acquiring first image brightness of the first image and second image brightness of the second image;
adjusting the image brightness of the first image with the adjusted transparency according to the first image brightness and the second image brightness to obtain a target image;
generating the target image on the second image.
In order to solve the above problem, an embodiment of the present application further provides a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor to execute any one of the image synthesis methods described above.
In order to solve the above problem, an embodiment of the present application further provides a terminal device, which includes a processor and a memory, where the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is used to execute the steps in the image synthesis method according to any one of the above descriptions.
Beneficial effects: different from the prior art, this application provides an image synthesis method, an apparatus, a storage medium, and a terminal device. The image synthesis method is applied to a mobile terminal that includes a first camera and a second camera; it acquires a first image shot by the first camera and a second image shot by the second camera, the first image containing at least one target photographic object, performs transparency adjustment on the first image according to the target photographic object, and then generates the transparency-adjusted first image on the second image to obtain a composite image containing the target photographic object. The terminal user can thus obtain the composite image during shooting, which reduces the complexity of the user's operations.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image synthesis method according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of the image synthesis method according to the embodiment of the present application.
Fig. 3-a is a schematic view of an application scenario of the image synthesis method according to the embodiment of the present application.
Fig. 3-b is a schematic view of another application scenario of the image synthesis method according to the embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image synthesis apparatus according to an embodiment of the present application.
Fig. 5 is another schematic structural diagram of an image synthesis apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Fig. 7 is another schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an image synthesis method, an image synthesis device, a storage medium and terminal equipment.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image synthesis method provided in an embodiment of the present application, where the image synthesis method is applied to a mobile terminal, the mobile terminal includes a first camera and a second camera, and the mobile terminal may be any intelligent electronic device with a mobile communication function, such as a smart phone, a tablet computer, a notebook computer, and the like. The specific flow of the image synthesis method provided by this embodiment may be as follows:
s101, a first image shot by a first camera and a second image shot by a second camera are obtained, wherein the first image comprises at least one target shooting object.
In this embodiment, the first camera and the second camera may both be front cameras, may both be rear cameras, or may be one of each. When both are front cameras or both are rear cameras, their shooting fields of view are different.
It is easily understood that, in the present embodiment, this step is the basis of all subsequent operations, and that the first image captured by the first camera includes both the target photographic object desired by the end user and other objects.
And S102, transparency adjustment is carried out on the first image according to the target shooting object.
Further, the step S102 may specifically include:
determining a display area of each target shooting object on the first image;
and adjusting the transparency of the area except the display area in the first image to be a first preset transparency.
In this embodiment, the terminal inputs the acquired first image into a neural network learning model, identifies all the target photographic objects on the first image through the model, and determines the display area of each target photographic object on the first image.
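The description does not fix a particular model, so as a stand-in for the neural-network step, the sketch below (illustrative only) derives one rectangular display area per labelled target object from a segmentation mask that such a model might produce:

```python
import numpy as np

def display_areas(mask: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Given a per-pixel label mask (0 = background, k = k-th target
    object), return one bounding box (top, left, bottom, right) per
    target object -- a stand-in for the patent's neural-network step."""
    boxes = []
    for label in range(1, int(mask.max()) + 1):
        ys, xs = np.nonzero(mask == label)
        if ys.size:
            boxes.append((int(ys.min()), int(xs.min()),
                          int(ys.max()) + 1, int(xs.max()) + 1))
    return boxes

# Toy 6x6 mask with one target object occupying rows 1-3, cols 2-4.
mask = np.zeros((6, 6), dtype=int)
mask[1:4, 2:5] = 1
print(display_areas(mask))  # [(1, 2, 4, 5)]
```

A real terminal would obtain `mask` from its segmentation network; the rectangular display area is one possible representation, and a per-pixel area would work equally well with the transparency steps below.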
In this embodiment, the first preset transparency may be set manually, for example to 100%, so that the terminal highlights the target photographic object as strongly as possible.
It should be noted that, in order to make the presentation effect of the target photographic object more realistic, the transparency of the display area may be adjusted as well.
Further, after the step of determining the display area of each target photographic object on the first image, the method may further include:
and adjusting the transparency of the display area to a second preset transparency.
In this embodiment, the first preset transparency is greater than the second preset transparency. When the terminal sets the transparency of the display area, the value of the second preset transparency may be determined from the color saturation of the display area: generally, the higher the color saturation of the display area, the larger the second preset transparency may be, and the lower the color saturation, the smaller it may be.
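The saturation-to-transparency mapping is left open by the description; one plausible choice is a linear map from mean HSV saturation to a transparency range. The linear form and the `lo`/`hi` bounds below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def second_preset_transparency(region_rgb: np.ndarray,
                               lo: float = 0.2, hi: float = 0.6) -> float:
    """Map the mean HSV saturation of the display region to a
    transparency value in [lo, hi]: higher saturation -> higher
    transparency, as the description suggests.  The linear mapping and
    the [lo, hi] bounds are assumptions for illustration."""
    rgb = region_rgb.astype(float) / 255.0
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    # HSV saturation: (max - min) / max, defined as 0 where max == 0.
    sat = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1.0), 0.0)
    return lo + (hi - lo) * float(sat.mean())

vivid = np.zeros((4, 4, 3), dtype=np.uint8)
vivid[..., 0] = 255                              # fully saturated red
gray = np.full((4, 4, 3), 128, dtype=np.uint8)   # zero saturation
print(second_preset_transparency(vivid))  # 0.6
print(second_preset_transparency(gray))   # 0.2
```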
And S103, generating the transparency-adjusted first image on the second image to obtain a composite image containing the target shooting object.
It should be noted that, when generating the composite image, the first image may be processed based on the shooting focal length and/or the image brightness of the first image, so as to make the rendering effect of the composite image more realistic.
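"Generating the transparency-adjusted first image on the second image" can be read as standard alpha-over compositing; the sketch below assumes that interpretation, with transparency 1.0 meaning fully transparent:

```python
import numpy as np

def composite(first: np.ndarray, second: np.ndarray,
              transparency: np.ndarray) -> np.ndarray:
    """Overlay `first` on `second` using a per-pixel transparency map in
    [0, 1] (1 = fully transparent, matching a 100% first preset
    transparency).  Standard alpha-over with alpha = 1 - transparency."""
    alpha = (1.0 - transparency)[..., None]
    out = alpha * first.astype(float) + (1.0 - alpha) * second.astype(float)
    return out.astype(np.uint8)

first = np.full((2, 2, 3), 200, dtype=np.uint8)
second = np.full((2, 2, 3), 100, dtype=np.uint8)
t = np.array([[1.0, 1.0], [0.0, 0.0]])  # top row fully transparent
out = composite(first, second, t)
print(out[0, 0, 0], out[1, 0, 0])  # 100 200
```

Here the fully transparent top row of the first image lets the second image show through, while the opaque bottom row keeps the first image's pixels, which is exactly the effect of setting the non-display area to the first preset transparency.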
For example, the step S103 may specifically include:
acquiring a first shooting focal length of a first image and a second shooting focal length of a second image;
adjusting the size of the first image with the adjusted transparency according to the first shooting focal length and the second shooting focal length to obtain a target image;
a target image is generated on the second image.
When the same object is photographed, the larger the shooting focal length, the larger the portion of the frame the object occupies (i.e., the larger its image); the smaller the shooting focal length, the smaller that portion (i.e., the smaller its image).
In this embodiment, in order to obtain a more attractive composite image in the subsequent composition step, an adjustment ratio is set according to the first shooting focal length and the second shooting focal length, and the size of the transparency-adjusted first image is adjusted by this ratio so that, when it is generated on the second image, it does not look obtrusive.
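The description does not give the formula for the adjustment ratio; one plausible choice (assumed here) is the ratio of the two focal lengths, since the magnification of a distant subject scales linearly with focal length. The nearest-neighbour resize is a minimal stand-in for the terminal's scaler:

```python
import numpy as np

def resize_ratio(first_focal_mm: float, second_focal_mm: float) -> float:
    """Assumed adjustment ratio: scale the first image by
    second_focal / first_focal so its magnification matches the
    second image's (valid for a distant subject)."""
    return second_focal_mm / first_focal_mm

def resize_nearest(img: np.ndarray, ratio: float) -> np.ndarray:
    """Nearest-neighbour resize of `img` by `ratio`."""
    h, w = img.shape[:2]
    nh = max(1, round(h * ratio))
    nw = max(1, round(w * ratio))
    ys = (np.arange(nh) / ratio).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / ratio).astype(int).clip(0, w - 1)
    return img[ys][:, xs]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
# First image shot at 52 mm, second at 26 mm -> ratio 0.5, size halved,
# matching the 2:1 example in the second embodiment.
print(resize_nearest(img, resize_ratio(52.0, 26.0)).shape)  # (2, 2)
```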
Wherein, the step S103 may further include:
acquiring first image brightness of a first image and second image brightness of a second image;
adjusting the image brightness of the first image after the transparency is adjusted according to the first image brightness and the second image brightness to obtain a target image;
a target image is generated on the second image.
In this embodiment, it is easy to understand that, because the ambient light differs when the first camera takes the first image and the second camera takes the second image, the brightness of the two images differs: the stronger the ambient light, the higher the image brightness; the weaker the ambient light, the lower the image brightness.
In this embodiment, in order to obtain a more attractive composite image in the subsequent composition step, the brightness of the first image is adjusted according to the brightness of the second image, so that the image looks more harmonious when the target image is generated on the second image.
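One simple reading of matching the first image's brightness to the second's is a global gain on mean intensity; the gain-based scheme below is an assumption, since the description does not specify the adjustment:

```python
import numpy as np

def match_brightness(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Scale `first` so its mean intensity matches `second`'s -- one
    simple interpretation of 'adjust the first image brightness according
    to the second image brightness' (the global gain is an assumption)."""
    b1 = first.astype(float).mean()
    b2 = second.astype(float).mean()
    gain = b2 / b1 if b1 > 0 else 1.0
    return np.clip(first.astype(float) * gain, 0, 255).astype(np.uint8)

bright = np.full((2, 2, 3), 200, dtype=np.uint8)  # shot in strong light
dim = np.full((2, 2, 3), 50, dtype=np.uint8)      # shot in weak light
print(match_brightness(bright, dim).mean())  # 50.0
```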
Specifically, when the target image with the adjusted image brightness is generated on the second image, the terminal determines the generation area on the second image according to the position of the target shooting object on the target image.
Further, after determining the generation area, the terminal may adjust the transparency of the generation area to a third preset transparency according to the color saturation of the generation area, so that the composite image is more attractive. Here the first preset transparency is greater than the third preset transparency, and the second preset transparency is greater than the third preset transparency.
The step of adjusting the image size according to the shooting focal lengths and the step of adjusting the first image brightness according to the second image brightness may be performed together, or either one may be performed alone.
In addition, after the image is synthesized, the user may further process the synthesized image, that is, after step S103, the method may further include:
displaying the composite image to a user on a current preview interface, wherein the current preview interface is provided with a plurality of beautifying keys;
acquiring a target beautifying key selected by a user from a plurality of beautifying keys;
and performing corresponding beautifying operation on the synthesized image according to the target beautifying key.
In this embodiment, when the terminal displays the composite image to the user on the current preview interface, several beautifying keys are provided on that interface so that the user can apply the corresponding beautifying operations to the composite image. A beautifying key may instruct the terminal to apply gray-level setting, color rendering (such as adding a filter), and the like to the composite image.
It is easy to understand that the beautification keys may be displayed as pictures and/or text corresponding to their effects, and that the terminal user can select the desired target beautification key by a specified gesture, by voice, and so on.
Specifically, when the user chooses gray-level setting for the composite image, the terminal determines the display area of each target photographic object on the composite image and then sets the gray level of the area outside the display areas to a preset gray level.
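The gray-level beautification just described can be sketched as follows; the rectangular display area and the gray level 128 are illustrative choices, not values from the patent:

```python
import numpy as np

def gray_outside(img: np.ndarray, box: tuple[int, int, int, int],
                 level: int = 128) -> np.ndarray:
    """Set every pixel outside the display area `box`
    (top, left, bottom, right) to the preset gray `level`, keeping the
    target object in colour."""
    top, left, bottom, right = box
    out = np.full_like(img, level)
    out[top:bottom, left:right] = img[top:bottom, left:right]
    return out

img = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
res = gray_outside(img, (1, 1, 3, 3))
print(res[0, 0, 0], res[1, 1, 0])  # 128 15
```

A production version would more likely convert the outside area to its luminance rather than a constant gray, but the region bookkeeping is the same.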
Referring to fig. 2, fig. 2 is another schematic flow chart of an image synthesis method according to an embodiment of the present disclosure, where the image synthesis method is applied to a mobile terminal, where the mobile terminal includes a first camera and a second camera, and the mobile terminal may be any intelligent electronic device with a mobile communication function, such as a smart phone, a tablet computer, a notebook computer, and the like. The specific flow of the image synthesis method provided by this embodiment may be as follows:
s201, a first image shot by a first camera and a second image shot by a second camera are obtained, wherein the first image comprises at least one target shooting object.
In this embodiment, the first camera and the second camera may both be front cameras, may both be rear cameras, or may be one of each. When both are front cameras or both are rear cameras, their shooting fields of view are different.
It is easily understood that, in the present embodiment, this step is the basis of all subsequent operations, and that the first image captured by the first camera includes both the target photographic object desired by the end user and other objects. For example, referring to fig. 3-a, the first camera and the second camera may be the front camera and the rear camera of the smartphone in fig. 3-a: the front camera takes a first image that includes a girl, and a radio is an object in the second image taken by the rear camera.
S202, determining a display area of each target shooting object on the first image.
In this embodiment, the terminal inputs the acquired first image into a neural network learning model, identifies all the target photographic objects on the first image through the model, and determines the display area of each target photographic object on the first image.
S203, the transparency of the area except the display area in the first image is adjusted to be a first preset transparency, and the transparency of the display area is adjusted to be a second preset transparency.
In this embodiment, the first preset transparency may be set manually, for example to 100%, so that the terminal highlights the target photographic object as strongly as possible. The first preset transparency is greater than the second preset transparency; when the terminal sets the transparency of the display area, the value of the second preset transparency may be determined from the color saturation of the display area: generally, the higher the color saturation, the larger the second preset transparency may be, and the lower the color saturation, the smaller it may be. For example, referring to fig. 3-a, the smartphone adjusts the transparency of the region other than the girl's image to 100% to highlight the girl.
And S204, acquiring a first shooting focal length of the first image and a second shooting focal length of the second image.
Here the terminal first performs the step of adjusting the image size according to the shooting focal lengths, and then performs the step of adjusting the first image brightness according to the second image brightness.
When the same object is photographed, the larger the shooting focal length, the larger the portion of the frame the object occupies (i.e., the larger its image); the smaller the shooting focal length, the smaller that portion (i.e., the smaller its image).
S205, adjusting the size of the first image after transparency adjustment according to the first shooting focal length and the second shooting focal length to obtain a target image.
In this embodiment, in order to obtain a more attractive composite image in the subsequent composition step, an adjustment ratio is set according to the first shooting focal length and the second shooting focal length, and the size of the transparency-adjusted first image is adjusted by this ratio so that, when it is generated on the second image, it does not look obtrusive. For example, referring to fig. 3-a, the smartphone resizes the transparency-adjusted first image by a 2:1 adjustment ratio to obtain an image of the girl at half the original size.
S206, acquiring first image brightness of the target image and second image brightness of the second image.
In this embodiment, it is easy to understand that, because the ambient light differs when the first camera takes the first image and the second camera takes the second image, the brightness of the two images differs: the stronger the ambient light, the higher the image brightness; the weaker the ambient light, the lower the image brightness.
And S207, adjusting the brightness of the first image according to the brightness of the second image.
In this embodiment, in order to obtain a more attractive composite image in the subsequent composition step, the brightness of the first image is adjusted according to the brightness of the second image, so that the image looks more harmonious when the target image is generated on the second image.
And S208, generating the target image with the adjusted image brightness on the second image to obtain a composite image containing the target shooting object.
Specifically, when the target image with the adjusted image brightness is generated on the second image, the terminal determines the generation area on the second image according to the position of the target shooting object on the target image.
Further, after determining the generation area, the terminal may adjust the transparency of the generation area to a third preset transparency according to the color saturation of the generation area, so that the composite image is more attractive. Here the first preset transparency is greater than the third preset transparency, and the second preset transparency is greater than the third preset transparency.
And S209, displaying the composite image to a user on a current preview interface, wherein the current preview interface is provided with a plurality of beautifying keys.
In this embodiment, when the terminal displays the composite image to the user on the current preview interface, several beautifying keys are provided on that interface so that the user can apply the corresponding beautifying operations to the composite image. A beautifying key may instruct the terminal to apply gray-level setting, color rendering (such as adding a filter), and the like to the composite image.
It is easy to understand that the beautification keys may be displayed as pictures and/or text corresponding to their effects, and that the terminal user can select the desired target beautification key by a specified gesture, by voice, and so on.
S210, acquiring a target beautifying key selected by a user from the plurality of beautifying keys, and performing corresponding beautifying operation on the synthesized image according to the target beautifying key.
Specifically, when the user chooses gray-level setting for the composite image, the terminal determines the display area of each target photographic object on the composite image and then sets the gray level of the area outside the display areas to a preset gray level. For example, fig. 3-b shows a composite image with gray-level setting obtained by this image synthesis method on the terminal.
Therefore, different from the prior art, the present application provides an image synthesis method, an apparatus, a storage medium, and a terminal device. The image synthesis method is applied to a mobile terminal that includes a first camera and a second camera; it acquires a first image captured by the first camera and a second image captured by the second camera, where the first image includes at least one target photographic object, performs transparency adjustment on the first image according to the target photographic object, and then generates the transparency-adjusted first image on the second image to obtain a composite image containing the target photographic object. The terminal user can thus obtain the composite image during shooting, which reduces the complexity of the user's operations.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image synthesis apparatus provided in the embodiment of the present application, which is applied to a mobile terminal, where the mobile terminal includes a first camera and a second camera, and the mobile terminal may be any intelligent electronic device with a mobile communication function, such as a smart phone, a tablet computer, a notebook computer, and the like. The image synthesizing apparatus provided by the present embodiment may include: an obtaining module 10, a setting module 20 and a generating module 30, wherein:
(1) acquisition module 10
The acquiring module 10 is configured to acquire a first image captured by a first camera and a second image captured by a second camera, where the first image includes at least one target object.
In this embodiment, the first camera and the second camera may both be front cameras, may both be rear cameras, or may be one of each. When both are front cameras or both are rear cameras, their shooting fields of view are different.
It is easily understood that, in the present embodiment, this step is the basis of all subsequent operations, and that the first image captured by the first camera includes both the target photographic object desired by the end user and other objects.
(2) Setting module 20
And the setting module 20 is configured to perform transparency adjustment on the first image according to the target shooting object.
Further, referring to fig. 5, fig. 5 is another schematic structural diagram of the image synthesizing apparatus according to the embodiment of the present application, and the setting module 20 specifically includes:
a determination unit 21 for determining a display area of each target photographic object on the first image;
the setting unit 22 is configured to adjust the transparency of the region of the first image other than the display region to a first preset transparency.
In this embodiment, the terminal inputs the acquired first image into a neural network learning model, which identifies all target photographic objects on the first image, and the terminal then determines the display area of each target photographic object on the first image.
In this embodiment, the first preset transparency may be set manually, for example to 100% (fully transparent), which allows the terminal to highlight the target photographic object most effectively.
It should be noted that, in order to make the presentation of the target photographic object more realistic, the transparency of the display area may also be adjusted.
For example, the image synthesizing apparatus may further include a setting subunit operable to:
and adjusting the transparency of the display area to a second preset transparency.
In this embodiment, the value of the first preset transparency is greater than that of the second preset transparency. When the terminal sets the transparency of the display region, the value of the second preset transparency may be determined according to the color saturation of the display region: in general, the higher the color saturation of the display region, the larger the value of the second preset transparency; the lower the color saturation, the smaller the value.
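The transparency rules described above can be illustrated with a minimal sketch. The box format, the saturation measure, and the mapping from saturation to the second preset transparency are all assumptions of this sketch; the patent only states the qualitative relationship (first transparency greater than second, second growing with saturation).

```python
import numpy as np

def adjust_transparency(rgba, box, first_t=1.0):
    """Sketch of the setting module: apply the first preset transparency
    outside the display region and a saturation-dependent second preset
    transparency inside it.

    `rgba` is an HxWx4 uint8 image; `box = (top, left, bottom, right)` is
    the target object's display region. Transparency t maps to alpha as
    255 * (1 - t), so 100% transparency means alpha 0.
    """
    out = rgba.copy()
    top, left, bottom, right = box
    mask = np.zeros(rgba.shape[:2], dtype=bool)
    mask[top:bottom, left:right] = True
    # First preset transparency (default 100% -> alpha 0) outside the region.
    out[~mask, 3] = round(255 * (1.0 - first_t))
    # Second preset transparency grows with the region's mean color
    # saturation but stays below the first, as the description requires.
    region = rgba[top:bottom, left:right, :3].astype(float) / 255.0
    saturation = float((region.max(axis=-1) - region.min(axis=-1)).mean())
    second_t = 0.1 + 0.4 * saturation  # assumed mapping for the sketch
    out[mask, 3] = round(255 * (1.0 - second_t))
    return out
```

A display region detected by a real model would typically be a pixel mask rather than a rectangle; a rectangle keeps the sketch short.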
(3) Generation module 30
And a generating module 30, configured to generate the transparency-adjusted first image on the second image to obtain a composite image including the target photographic object.
It should be noted that, when generating the composite image, the first image may be processed based on its shooting focal length and/or its image brightness, so as to make the rendering effect of the composite image more realistic.
For example, the image synthesizing apparatus may further include a first adjusting unit configured to:
acquiring a first shooting focal length of a first image and a second shooting focal length of a second image;
adjusting the size of the first image with the adjusted transparency according to the first shooting focal length and the second shooting focal length to obtain a target image;
a target image is generated on the second image.
When the same object is photographed, a larger shooting focal length makes the object occupy a larger portion of the frame (i.e., appear larger), while a smaller shooting focal length makes it occupy a smaller portion (i.e., appear smaller).
In this embodiment, to obtain a more attractive composite image in the subsequent composition step, an adjustment ratio is set according to the first shooting focal length and the second shooting focal length, and the size of the first image is scaled by this ratio to an appropriate size, so that the transparency-adjusted first image does not appear obtrusive when generated on the second image.
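The focal-length-based resizing can be sketched as follows. The patent only says an adjustment ratio is set from the two focal lengths; taking it as `f2 / f1` (a longer focal length makes the same object appear larger, per the relationship stated above) is an assumption of this sketch, as is the nearest-neighbour resampling.

```python
import numpy as np

def scale_by_focal_length(img, f1, f2):
    """Resize the transparency-adjusted first image by an adjustment
    ratio derived from the first (f1) and second (f2) shooting focal
    lengths, so its apparent object size suits the second image.

    Nearest-neighbour index mapping avoids external resampling
    dependencies; any image array (HxW or HxWxC) is accepted.
    """
    ratio = f2 / f1
    h, w = img.shape[:2]
    new_h = max(1, round(h * ratio))
    new_w = max(1, round(w * ratio))
    # Map each output row/column back to its source row/column.
    rows = np.clip((np.arange(new_h) / ratio).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / ratio).astype(int), 0, w - 1)
    return img[rows][:, cols]
```

In practice a library resampler (e.g. bilinear) would give smoother results; the ratio is the point being illustrated.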
Further, the image synthesizing apparatus may further include a second adjusting unit operable to:
acquiring first image brightness of a first image and second image brightness of a second image;
adjusting the image brightness of the first image after the transparency is adjusted according to the first image brightness and the second image brightness to obtain a target image;
a target image is generated on the second image.
In this embodiment, it is easy to understand that when the first camera captures the first image and the second camera captures the second image, the two images may differ in brightness because of different ambient light: the stronger the ambient light, the higher the image brightness; the weaker the ambient light, the lower the image brightness.
In this embodiment, to obtain a more attractive composite image in the subsequent composition step, the brightness of the first image is adjusted according to the brightness of the second image, ensuring that the target image blends harmoniously when generated on the second image.
Specifically, when the target image with adjusted image brightness is generated on the second image, the terminal determines the generation region on the second image according to the position of the target photographic object in the target image.
Further, after determining the generation region, the terminal may adjust the transparency of the generation region to a third preset transparency according to the color saturation of that region, making the composite image more attractive. Here, the value of the first preset transparency is greater than that of the third preset transparency, and the value of the second preset transparency is also greater than that of the third.
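The brightness matching and the generation step can be sketched together. Matching mean brightness by a single scale factor is one plausible reading of the second adjusting unit, not the patent's stated method, and the `(top, left)` generation-region corner is an assumed parameterization.

```python
import numpy as np

def match_brightness(first_rgba, second_rgb):
    """Scale the first image's RGB channels so its mean brightness
    matches the second image's mean brightness (assumed interpretation
    of the second adjusting unit). Alpha is left untouched."""
    b1 = first_rgba[..., :3].mean()
    b2 = second_rgb.mean()
    out = first_rgba.astype(float)
    out[..., :3] = np.clip(out[..., :3] * (b2 / max(b1, 1e-6)), 0, 255)
    return out.astype(np.uint8)

def generate_on(target_rgba, second_rgb, top, left):
    """Alpha-composite the target image over the second image at the
    generation region whose top-left corner is (top, left)."""
    h, w = target_rgba.shape[:2]
    out = second_rgb.astype(float)
    alpha = target_rgba[..., 3:4].astype(float) / 255.0
    out[top:top + h, left:left + w] = (
        alpha * target_rgba[..., :3]
        + (1 - alpha) * out[top:top + h, left:left + w]
    )
    return out.astype(np.uint8)
```

Pixels whose alpha was set to 0 by the transparency adjustment simply let the second image show through, which is what "generating the transparency-adjusted first image on the second image" amounts to.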
In addition, after the image is synthesized, the user may further process the synthesized image, that is, the image synthesizing apparatus may further include a beautification module for:
displaying the composite image to a user on a current preview interface, wherein the current preview interface is provided with a plurality of beautifying keys;
acquiring a target beautifying key selected by a user from a plurality of beautifying keys;
and performing corresponding beautifying operation on the synthesized image according to the target beautifying key.
In this embodiment, when the terminal displays the composite image to the user on the current preview interface, a plurality of beautifying keys are provided on the preview interface, so that the user can perform corresponding beautifying operations on the composite image. The beautification key may instruct the terminal to perform gray level setting or color rendering (such as adding a filter) on the synthesized image, and the like.
It is easy to understand that a beautification key may be displayed as a picture and/or text corresponding to its effect, and the terminal user can select the desired target beautification key by a specified gesture, by voice, and the like.
Specifically, when the user selects to perform gray level setting on the composite image, the terminal determines a display area of each target photographic object on the composite image, and then sets the gray level of an area except the display area in the composite image as a preset gray level.
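The gray-level beautification just described can be sketched as below. The rectangular box standing in for the display area, and the fallback luma conversion when no preset gray level is supplied, are assumptions of this sketch.

```python
import numpy as np

def gray_outside(composite_rgb, box, gray_level=None):
    """Gray-level beautification: set the area of the composite image
    outside the target's display region to a preset gray level, or, if
    none is given, desaturate it with the usual Rec. 601 luma weights.

    `box = (top, left, bottom, right)` marks the display region to keep
    in color; a real implementation would use the detected display area
    of each target photographic object.
    """
    out = composite_rgb.astype(float)
    top, left, bottom, right = box
    outside = np.ones(composite_rgb.shape[:2], dtype=bool)
    outside[top:bottom, left:right] = False
    if gray_level is None:
        # Convert outside pixels to grayscale rather than a flat level.
        luma = out[outside] @ np.array([0.299, 0.587, 0.114])
        out[outside] = luma[:, None]
    else:
        out[outside] = gray_level
    return out.astype(np.uint8)
```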
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
From the foregoing, it can be seen that the present application provides an image synthesis method, an apparatus, a storage medium, and a terminal device. The image synthesis method is applied to a mobile terminal that includes a first camera and a second camera. The obtaining module 10 acquires a first image captured by the first camera and a second image captured by the second camera, where the first image includes at least one target photographic object; the setting module 20 performs transparency adjustment on the first image according to the target photographic object; and the generating module 30 then generates the transparency-adjusted first image on the second image to obtain a composite image containing the target photographic object. The terminal user can therefore obtain the composite image during shooting, which reduces the complexity of the user's operation.
In addition, the embodiment of the application further provides a terminal device, and the terminal device can be a smart phone, a tablet computer and other devices. As shown in fig. 6, the terminal device 200 includes a processor 201 and a memory 202. The processor 201 is electrically connected to the memory 202.
The processor 201 is a control center of the terminal device 200, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or loading an application program stored in the memory 202 and calling data stored in the memory 202, thereby performing overall monitoring of the terminal device.
In this embodiment, the terminal device 200 is provided with a plurality of memory partitions, the plurality of memory partitions includes a system partition and a target partition, the processor 201 in the terminal device 200 loads instructions corresponding to processes of one or more application programs into the memory 202 according to the following steps, and the processor 201 runs the application programs stored in the memory 202, so as to implement various functions:
acquiring a first image shot by a first camera and a second image shot by a second camera, wherein the first image comprises at least one target shooting object;
adjusting the transparency of the first image according to the target shooting object;
and generating the transparency-adjusted first image on the second image to obtain a composite image containing the target shooting object.
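The three steps above can be condensed into one self-contained sketch for same-sized images. The rectangular `box` standing in for the detected display areas, and the fully opaque/fully transparent split, are simplifying assumptions; the earlier description allows intermediate transparencies.

```python
import numpy as np

def synthesize(first_rgb, second_rgb, box):
    """End-to-end sketch of the three claimed steps: keep the target's
    display region from the first image visible, make the rest of the
    first image fully transparent, and composite over the second image.

    `first_rgb` and `second_rgb` are equally sized HxWx3 uint8 images;
    `box = (top, left, bottom, right)` is the target's display region.
    """
    top, left, bottom, right = box
    alpha = np.zeros(first_rgb.shape[:2] + (1,))
    alpha[top:bottom, left:right] = 1.0  # target region stays visible
    out = alpha * first_rgb + (1.0 - alpha) * second_rgb
    return out.astype(np.uint8)
```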
Fig. 7 is a block diagram showing a specific structure of a terminal device according to an embodiment of the present invention, which can be used to implement the image synthesis method provided in the above-described embodiment. The terminal device 300 may be a smart phone or a tablet computer.
The RF circuit 310 is used for receiving and transmitting electromagnetic waves and for performing interconversion between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices. The RF circuit 310 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuit 310 may communicate with various networks, such as the internet, an intranet, or a wireless network, or may communicate with other devices over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other protocols for short message service, as well as any other suitable communication protocol, including protocols that have not yet been developed.
The memory 320 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the image synthesis method and apparatus in the foregoing embodiments, and the processor 380 executes various functional applications and data processing by running the software programs and modules stored in the memory 320, thereby implementing the image synthesis function. The memory 320 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 320 may further include memory located remotely from the processor 380, which may be connected to the terminal device 300 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 330 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 330 may include a touch-sensitive surface 331 as well as other input devices 332. The touch-sensitive surface 331, also referred to as a touch screen or touch pad, may collect touch operations by a user on or near the touch-sensitive surface 331 (e.g., operations by a user on or near the touch-sensitive surface 331 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 331 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 380, and can receive and execute commands sent by the processor 380. In addition, the touch-sensitive surface 331 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 330 may comprise other input devices 332 in addition to the touch sensitive surface 331. In particular, other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by or provided to the user and various graphic user interfaces of the terminal apparatus 300, which may be configured by graphics, text, icons, video, and any combination thereof. The Display unit 340 may include a Display panel 341, and optionally, the Display panel 341 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, touch-sensitive surface 331 may overlay display panel 341, and when touch-sensitive surface 331 detects a touch operation thereon or thereabout, communicate to processor 380 to determine the type of touch event, and processor 380 then provides a corresponding visual output on display panel 341 in accordance with the type of touch event. Although in FIG. 7, touch-sensitive surface 331 and display panel 341 are implemented as two separate components for input and output functions, in some embodiments, touch-sensitive surface 331 and display panel 341 may be integrated for input and output functions.
The terminal device 300 may also include at least one sensor 350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 341 and/or the backlight when the terminal device 300 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal device 300, detailed descriptions thereof are omitted.
The terminal device 300 may assist the user in e-mail, web browsing, streaming media access, etc. through the transmission module 370 (e.g., a Wi-Fi module), which provides the user with wireless broadband internet access. Although fig. 7 shows the transmission module 370, it is understood that it does not belong to the essential constitution of the terminal device 300, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 380 is a control center of the terminal device 300, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the terminal device 300 and processes data by running or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby performing overall monitoring of the mobile phone. Optionally, processor 380 may include one or more processing cores; in some embodiments, processor 380 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 380.
Terminal device 300 also includes a power supply 390 (e.g., a battery) for powering the various components, which may be logically coupled to processor 380 via a power management system in some embodiments to manage charging, discharging, and power consumption management functions via the power management system. The power supply 390 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal device 300 may further include a camera (e.g., a front camera, a rear camera), a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the display unit of the terminal device is a touch screen display, the terminal device further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for:
acquiring a first image shot by a first camera and a second image shot by a second camera, wherein the first image comprises at least one target shooting object;
adjusting the transparency of the first image according to the target shooting object;
and generating the transparency-adjusted first image on the second image to obtain a composite image containing the target shooting object.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor. To this end, embodiments of the present invention provide a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute steps in any one of the image synthesis methods provided by the embodiments of the present invention.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image synthesis method provided by the embodiment of the present invention, the beneficial effects that can be achieved by any image synthesis method provided by the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
In addition to the above embodiments, other embodiments are also possible. All technical solutions formed by using equivalents or equivalent substitutions fall within the protection scope of the claims of the present application.
In summary, although the present application has been described with reference to the preferred embodiments, the above-described preferred embodiments are not intended to limit the present application, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present application, so that the scope of the present application shall be determined by the appended claims.
Claims (10)
1. An image synthesis method is applied to a mobile terminal, the mobile terminal comprises a first camera and a second camera, and the image synthesis method comprises the following steps:
acquiring a first image shot by the first camera and a second image shot by the second camera, wherein the first image comprises at least one target shooting object;
performing transparency adjustment on the first image according to the target shooting object;
and generating the first image with the adjusted transparency on the second image to obtain a composite image containing the target shooting object.
2. The image synthesis method according to claim 1, wherein the step of adjusting the transparency of the first image according to the target photographic object specifically includes:
determining a display area of each target shooting object on the first image;
and adjusting the transparency of the area except the display area in the first image to be a first preset transparency.
3. The image synthesis method according to claim 2, wherein the step of determining the display area of each of the target photographic objects on the first image is followed by further comprising:
and adjusting the transparency of the display area to a second preset transparency.
4. The image synthesis method according to claim 1, wherein the step of generating the transparency-adjusted first image on the second image is followed by further comprising:
displaying the composite image to a user on a current preview interface, wherein the current preview interface is provided with a plurality of beautifying keys;
acquiring a target beautifying key selected by a user from the beautifying keys;
and performing corresponding beautifying operation on the synthesized image according to the target beautifying key.
5. The image synthesis method according to claim 1, wherein the step of generating the transparency-adjusted first image on the second image specifically includes:
acquiring a first shooting focal length of the first image and a second shooting focal length of the second image;
adjusting the size of the first image with the adjusted transparency according to the first shooting focal length and the second shooting focal length to obtain a target image;
generating the target image on the second image.
6. The image synthesis method according to claim 1, wherein the step of generating the transparency-adjusted first image on the second image specifically includes:
acquiring first image brightness of the first image and second image brightness of the second image;
adjusting the image brightness of the first image with the adjusted transparency according to the first image brightness and the second image brightness to obtain a target image;
generating the target image on the second image.
7. An image synthesis apparatus applied to a mobile terminal including a first camera and a second camera, the image synthesis apparatus comprising:
the acquisition module is used for acquiring a first image shot by the first camera and a second image shot by the second camera, wherein the first image comprises at least one target shooting object;
the setting module is used for carrying out transparency adjustment on the first image according to the target shooting object;
and the generation module is used for generating the first image with the adjusted transparency on the second image so as to obtain a composite image containing the target shooting object.
8. The image synthesis apparatus according to claim 7, wherein the setting module specifically includes:
a determination unit configured to determine a display area of each of the target photographic objects on the first image;
and the setting unit is used for adjusting the transparency of the area except the display area in the first image to be a first preset transparency.
9. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor to perform the image composition method of any of claims 1 to 6.
10. A terminal device comprising a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store instructions and data, the processor being configured to perform the steps of the image synthesis method according to any one of claims 1 to 6.
Publications (1)
Publication Number | Publication Date |
---|---|
CN111182236A true CN111182236A (en) | 2020-05-19 |