CN112135049B - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN112135049B
CN112135049B (application number CN202011015173.2A)
Authority
CN
China
Prior art keywords
preview image
target
transparency
image
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011015173.2A
Other languages
Chinese (zh)
Other versions
CN112135049A (en)
Inventor
韩桂敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011015173.2A priority Critical patent/CN112135049B/en
Publication of CN112135049A publication Critical patent/CN112135049A/en
Application granted granted Critical
Publication of CN112135049B publication Critical patent/CN112135049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image processing method, an image processing device and electronic equipment, and belongs to the technical field of electronic equipment. The method comprises the following steps: displaying a first preview image and a second preview image, wherein the first preview image is displayed on the second preview image in an overlapping manner; receiving a first input of a user; and, in response to the first input, adjusting a transparency of a target region of a target preview image. According to the image processing method disclosed by the application, a user can delineate a target area and set its transparency according to personalized requirements, so as to obtain a target image in which only the target area, or only the areas other than the target area, has a double exposure effect; the personalized requirements of the user can thus be met, and the interest of image processing is improved.

Description

Image processing method and device and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of electronic equipment, in particular to an image processing method and device and electronic equipment.
Background
Double exposure is a technique of photography, and can fuse images in two images into one image, thereby exhibiting a cool and diversified visual effect.
Current ways of generating a double-exposure image mainly include the following two:
The first method: a captured image is imported into image processing software, and a double-exposure image is obtained through manual retouching in the software. The operation is complicated, and it places high demands on the user's retouching expertise.
The second method: a double-exposure image is shot directly using the double-exposure shooting function provided by existing electronic equipment. After the electronic equipment starts the double-exposure shooting function, it shoots two images in succession and then superimposes and synthesizes the two shot images into one double-exposure image. Although this method is convenient to operate and requires no expertise from the user, the entire frame of the synthesized double-exposure image exhibits the double-exposure effect, so the personalized requirements of users cannot be met and the interest is low.
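For illustration only, the following Kotlin sketch shows such whole-frame superposition at a single fixed transparency (the helper name blendWholeFrame and its default alpha value are assumptions, not part of any existing device's implementation); because one blend ratio is applied to every pixel, the entire frame necessarily carries the double-exposure effect:

    import android.graphics.Bitmap
    import android.graphics.Canvas
    import android.graphics.Paint

    // Whole-frame superposition: the top image is drawn over the bottom image with one
    // global alpha, so every pixel of the result is a blend of the two inputs.
    fun blendWholeFrame(bottom: Bitmap, top: Bitmap, topAlpha: Int = 128): Bitmap {
        val result = bottom.copy(Bitmap.Config.ARGB_8888, true)   // mutable copy of the bottom layer
        val canvas = Canvas(result)
        val paint = Paint().apply { alpha = topAlpha }             // 0..255, applied to the whole top layer
        canvas.drawBitmap(top, 0f, 0f, paint)                      // assumes both bitmaps have the same size
        return result
    }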
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, which can solve the problem in the prior art that the entire frame of a generated double-exposure image exhibits the double-exposure effect.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method includes: displaying a first preview image and a second preview image, wherein the first preview image is displayed on the second preview image in an overlapping manner; receiving a first input of a user; adjusting a transparency of a target region of a target preview image in response to the first input; wherein the target preview image is at least one of the first preview image and the second preview image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, where the apparatus includes: the display module is used for displaying a first preview image and a second preview image, wherein the first preview image is displayed on the second preview image in an overlapping manner; the receiving module is used for receiving a first input of a user; the adjusting module is used for responding to the first input and adjusting the transparency of a target area of the target preview image; wherein the target preview image is at least one of the first preview image and the second preview image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, a first preview image and a second preview image are displayed; a first input of a user is received; and, in response to the first input, the transparency of a target area of the target preview image is adjusted. The user can delineate a local area and set its transparency according to personalized requirements, so that only the target area, or only the areas other than the target area, in the adjusted target preview image exhibits a double exposure effect; the personalized requirements of the user can thus be met, and the interest of image processing is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments of the present application will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a flow chart illustrating the steps of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a positional relationship between a first preview image and a second preview image;
FIG. 3 is a schematic view of a transparency adjustment interface;
FIG. 4 is a schematic view of an image processing interface;
fig. 5 is a block diagram showing a configuration of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a block diagram showing a configuration of an electronic apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a hardware configuration of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the application are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", etc. are generally used in a generic sense and do not limit the number of objects; for example, a first object can be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the former and latter related objects are in an "or" relationship.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart illustrating steps of an image processing method according to an embodiment of the present application is shown.
The image processing method of the embodiment of the application comprises the following steps:
step 101: the first preview image and the second preview image are displayed.
The image processing method provided by the embodiment of the application is suitable for processing the preview image in the shooting process. In the case where the double exposure function of the camera is on, the image processing method shown in the embodiment of the present application is executed.
Both the displayed first preview image and second preview image can be images collected by a camera during shooting; alternatively, one of the images may be a preview image acquired by the camera, and the other may be an image stored locally in the electronic device or an image acquired from a server.
One of the first preview image and the second preview image serves as the top layer, that is, the background image, and the other serves as the bottom layer.
The first preview image is displayed on the second preview image in an overlapping manner. The transparency of the first preview image and the transparency of the second preview image can both be smaller than a preset transparency; the purpose of setting the transparency of the first preview image smaller than the preset transparency is to prevent the image in the second preview image from showing through the first preview image. In an optional embodiment, the transparency of the first preview image and the transparency of the second preview image are both set to 0; with its transparency set to 0, the first preview image completely blocks the image in the second preview image. Alternatively, the first preview image and the second preview image may both have a transparency greater than the preset transparency.
The target preview image is at least one of the first preview image and the second preview image. By default, the system may use the first preview image as the top layer and the second preview image as the bottom layer, that is, take the first preview image as the target preview image; the system may instead default to using the second preview image as the top layer and the first preview image as the bottom layer, that is, take the second preview image as the target preview image. Of course, without being limited thereto, the user may also manually switch the stacking order of the first preview image and the second preview image through the first input, thereby changing the target preview image. Furthermore, while triggering the system to change the target preview image through the first input, the user can also select a target area in the target preview image. For example, the target area can be determined according to the touch position of the first input and the strength of the first input.
For example, when the first preview image serves as the top layer and the second preview image as the bottom layer, and the transparency of the first preview image is greater than a first preset value while the transparency of the second preview image is greater than a second preset value, the target preview image may be the first preview image or the second preview image.
For another example, when the first preview image serves as the top layer and the second preview image as the bottom layer, and the transparency of the first preview image is smaller than a third preset value while the transparency of the second preview image is smaller than a fourth preset value, the target preview image may be the first preview image.
Fig. 2 shows an exemplary schematic diagram of the positional relationship between the first preview image and the second preview image: the first preview image 201 serves as the top layer and covers the second preview image 202, which serves as the bottom layer; the two layers are drawn spaced apart by a certain distance for clarity.
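As a minimal sketch of this layering (the view names are assumptions, and note that the description's "transparency 0" corresponds to a fully opaque Android view alpha of 1.0), the two preview images can be stacked in a FrameLayout with the first preview image drawn last:

    import android.widget.FrameLayout
    import android.widget.ImageView

    // Stack the two previews: the view added last is drawn on top.
    fun stackPreviews(container: FrameLayout, secondPreview: ImageView, firstPreview: ImageView) {
        container.removeAllViews()
        container.addView(secondPreview)   // second preview image: bottom layer
        container.addView(firstPreview)    // first preview image: top layer
        firstPreview.alpha = 1.0f          // fully opaque (transparency 0), so the bottom layer is blocked
    }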
Step 102: a first input of a user is received.
The first input may be used to delineate a target region in the target preview image. The first input may include at least one of: an operation of delineating a target area in the target preview image, an operation of smearing a target area in the target preview image, a pressing operation in the target preview image, and the like, where the strength of the pressing operation may be directly proportional to the radius of the target area. The target area may be of any suitable shape, with its size and shape determined according to the first input.
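A minimal sketch of the pressing-operation variant, assuming a circular target area centred on the touch point whose radius grows with the reported press strength (the TargetRegion data class and the maxRadius parameter are illustrative assumptions):

    import android.graphics.PointF
    import android.view.MotionEvent

    data class TargetRegion(val center: PointF, val radius: Float)

    // Map a press to a circular target region: stronger press, larger radius.
    fun regionFromPress(event: MotionEvent, maxRadius: Float): TargetRegion {
        val pressure = event.pressure.coerceIn(0f, 1f)                       // typical range on most devices
        return TargetRegion(PointF(event.x, event.y), pressure * maxRadius)  // radius proportional to press strength
    }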
The first input may also be used to determine the target area while determining the target preview image.
The first input may also be used to determine the transparency of the target area.
A first preset control is displayed in the image processing interface and serves as the transparency adjustment entry. The first preset control may be a slide bar, in which case the first input may be an operation of adjusting the transparency by moving the cursor on the slide bar; the first preset control may also be a preset button, in which case the first input may be an operation in which the user touches the button to trigger the system to display a transparency input box or transparency options, and then manually inputs a target transparency or selects a transparency from the options provided by the system.
As shown in the schematic diagram of the transparency adjustment interface in fig. 3, the user may perform a pressing operation on the first preset control in fig. 3 (a) to trigger the system to display a slide bar; fig. 3 (b) shows the interface with the slide bar displayed, where the transparency of the target region can be set by adjusting the position of the cursor on the slide bar.
The transparency set here is greater than the preset transparency; the higher the transparency value, the stronger the see-through effect.
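A sketch of the slide-bar entry, assuming an Android SeekBar whose 0..100 progress is mapped to a 0.0..1.0 transparency value for the target region (the control wiring and the callback are assumptions):

    import android.widget.SeekBar

    // Bind the slide bar to a transparency callback; 0.0 = opaque, 1.0 = fully transparent.
    fun bindTransparencySlider(slider: SeekBar, onTransparency: (Float) -> Unit) {
        slider.max = 100
        slider.setOnSeekBarChangeListener(object : SeekBar.OnSeekBarChangeListener {
            override fun onProgressChanged(bar: SeekBar?, progress: Int, fromUser: Boolean) {
                if (fromUser) onTransparency(progress / 100f)   // cursor position -> transparency of the target region
            }
            override fun onStartTrackingTouch(bar: SeekBar?) {}
            override fun onStopTrackingTouch(bar: SeekBar?) {}
        })
    }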
Step 103: in response to the first input, a transparency of a target region of the target preview image is adjusted.
If no setting input of the transparency of the target area is received from the user, the transparency of the target area can be determined as the system's default transparency. The specific value of the default transparency can be flexibly set by those skilled in the art, as long as it ensures that the image on the underlying layer shows through in the target area.
Because the first preview image and the second preview image are displayed in an overlapping manner, after the user demarcates a target area in the target preview image, a corresponding area exists in the other preview image. After the transparency of the target area in the target preview image is adjusted, the image in the corresponding area of the other preview image shows through the target area of the target preview image, and finally a target image in which the target area has a double exposure effect is obtained.
Illustratively, adjusting the transparency of the target area of the target preview image includes: in the case that the transparency of the target preview image is greater than the target transparency, turning the transparency of the target area in the target preview image down to the target transparency; and in the case that the transparency of the target preview image is smaller than the target transparency, turning the transparency of the target area in the target preview image up to be larger than the target transparency. The target transparency is the transparency of the target area set by the user through the first input, or the system default transparency of the target area.
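One possible reading of this rule, as a sketch with transparency values normalized to 0.0..1.0 (the 0.1 increment used to go above the target transparency is an arbitrary illustrative choice):

    // Adjust the target region's transparency relative to the target transparency.
    fun adjustRegionTransparency(layerTransparency: Float, targetTransparency: Float): Float =
        if (layerTransparency > targetTransparency)
            targetTransparency                                   // turn the region down to the target transparency
        else
            (targetTransparency + 0.1f).coerceAtMost(1.0f)       // turn the region up above the target transparency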
For example, when the first preview image serves as the top layer and the second preview image as the bottom layer, and the transparency of the first preview image is greater than the first preset value while the transparency of the second preview image is greater than the second preset value, the target preview image may be the first preview image or the second preview image. In this case, adjusting the transparency of the target area of the target preview image specifically means turning the transparency of the target area down, so that the double exposure effect appears in the areas other than the target area.
For another example, when the first preview image serves as the top layer and the second preview image as the bottom layer, and the transparency of the first preview image is smaller than the third preset value while the transparency of the second preview image is smaller than the fourth preset value, the target preview image may be the first preview image. In this case, adjusting the transparency of the target area of the target preview image specifically means turning the transparency of the target area up, so that the double exposure effect appears in the target area.
When the target image is generated, the image in the top layer serves as the background of the target image, and part of the image in the bottom layer serves as the image subject in the target image. For example, if the first preview image is an image of scenic spot A and the second preview image is an image of a person, and the target area corresponds to area B in the scenic-spot-A image and to the head area in the person image, the generated target image shows a double exposure effect in which the person's head shows through in area B of the scenic-spot-A image.
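An illustrative compositing sketch of this step, assuming a circular target region and equally sized layers (the helper names, the per-pixel loop, and the linear blend are assumptions chosen for clarity rather than performance):

    import android.graphics.Bitmap
    import android.graphics.PointF
    import kotlin.math.hypot

    // Blend the bottom layer through the top layer only inside the circular target region.
    fun composeTargetImage(
        topLayer: Bitmap, bottomLayer: Bitmap,
        center: PointF, radius: Float, regionTransparency: Float   // 0.0 = opaque top, 1.0 = bottom fully visible
    ): Bitmap {
        val out = topLayer.copy(Bitmap.Config.ARGB_8888, true)
        for (y in 0 until out.height) {
            for (x in 0 until out.width) {
                if (hypot((x - center.x).toDouble(), (y - center.y).toDouble()) <= radius) {
                    out.setPixel(x, y, blend(topLayer.getPixel(x, y), bottomLayer.getPixel(x, y), regionTransparency))
                }
            }
        }
        return out
    }

    // Linear per-channel blend; t is the transparency applied to the top layer inside the region.
    private fun blend(top: Int, bottom: Int, t: Float): Int {
        fun mix(a: Int, b: Int) = ((1 - t) * a + t * b).toInt().coerceIn(0, 255)
        val a = mix((top ushr 24) and 0xFF, (bottom ushr 24) and 0xFF)
        val r = mix((top ushr 16) and 0xFF, (bottom ushr 16) and 0xFF)
        val g = mix((top ushr 8) and 0xFF, (bottom ushr 8) and 0xFF)
        val b = mix(top and 0xFF, bottom and 0xFF)
        return (a shl 24) or (r shl 16) or (g shl 8) or b
    }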
It should be noted that the image processing method provided in the embodiment of the present application is not limited to processing a single image; the method may also be used to process each frame of image in a video, so as to finally obtain a video with a double exposure effect.
According to the image processing method provided by the embodiment of the application, the first preview image and the second preview image are displayed; a first input of a user is received; and the transparency of the target area of the target preview image is adjusted in response to the first input. The user can delineate a local area and set its transparency according to personalized requirements, so that only the target area, or only the areas other than the target area, in the adjusted target preview image exhibits a double exposure effect; the personalized requirements of the user can thus be met, and the interest of image processing is improved.
When the image is processed, in addition to the first preset control shown in step 102, other controls for user interaction may be included in the image processing interface.
Fig. 4 shows an exemplary schematic diagram of the image processing interface. The image processing interface includes a first preset control 401, a second preset control 402, and a third preset control 403, where the first preset control 401 is the transparency adjustment entry, the second preset control 402 is the layer position switching entry, and the third preset control 403 is the entry for cancelling the target area. The user may further interact with the electronic device by performing an input on any of the preset controls; several interaction manners are listed below by way of example:
in an alternative embodiment, where the target preview image is a first preview image, the step of adjusting the transparency of the target region of the target preview image in response to the first input may comprise the sub-steps of:
the first substep: updating display positions of the first preview image and the second preview image in response to the first input;
in this alternative embodiment, the first input is a selected operation of the second preset control 402. And after the display positions of the first preview image and the second preview image are updated, the second preview image is covered on the first preview image.
And a second substep: receiving a second input, and adjusting the transparency of the target area of the second preview image in the case that the second preview image is displayed on the first preview image in an overlapping manner.
A first preset control 401 is displayed in the image processing interface, and the first preset control 401 is the transparency adjustment entry. The first preset control may be a slide bar, in which case the second input may be an operation of adjusting the transparency by moving the cursor on the slide bar; the first preset control may also be a preset button, in which case the second input may be an operation in which the user touches the button to trigger the system to display a transparency input box or transparency options, and then manually inputs a target transparency or selects a transparency from the options provided by the system.
After the target area and the target transparency are determined, the transparency of the target area of the second preview image needs to be adjusted to the target transparency, and a target image is generated.
Through this optional embodiment, the user can switch the display positions of the two preview images as required; the operation is flexible and the interest is strong.
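A minimal sketch of the layer-switch entry, assuming the two previews are sibling views in the same parent (the view reference and the decision to keep the raised layer opaque are assumptions):

    import android.widget.ImageView

    // Raise the second preview above the first so that subsequent region edits apply to it.
    fun switchLayerOrder(secondPreview: ImageView) {
        secondPreview.bringToFront()   // second preview image now overlays the first preview image
        secondPreview.alpha = 1.0f     // keep it opaque until a target region transparency is set
    }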
In an optional embodiment, in the case that the target area is determined by the first input, after the step of determining the target area, the following step may be further included:
receiving a third input of a user to a third preset control;
wherein the third input may comprise at least one of: a pressing operation, a clicking operation, a sliding operation, or the like.
In response to a third input, the delineation of the target region is cancelled.
The third preset control may also be called a reset-redraw button. In response to the third input, the target area drawn on the current image processing interface is erased, and the first preview image covering the upper layer is displayed full frame. The user may then perform the first input in the image processing interface again to re-demarcate the target area.
With this optional manner, the user can cancel the delineated target area and delineate it again simply by performing the third input, which is convenient and fast.
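A sketch of the reset-redraw behaviour, assuming the delineated region is kept as a Path overlay on the top preview view (the RegionOverlay helper is an assumption):

    import android.graphics.Path
    import android.view.View

    // Erase the drawn target region and show the top preview full frame again.
    class RegionOverlay(private val topPreviewView: View) {
        val outline = Path()               // the user-drawn target-region outline
        fun clear() {
            outline.reset()                // cancel the delineation
            topPreviewView.alpha = 1.0f    // restore the top preview to full-frame, opaque display
            topPreviewView.invalidate()    // redraw without the region marking
        }
    }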
In an optional embodiment, when the first preview image and the second preview image are acquired, a front-facing camera can be called to acquire the first preview image, and a rear camera can be called to acquire the second preview image.
Of course, the method is not limited to this: two different front cameras can be called to respectively acquire the first preview image and the second preview image, or two different rear cameras can be called to respectively acquire the first preview image and the second preview image.
Simultaneously calling the front camera and the rear camera to acquire the two preview images is, on the one hand, convenient to operate; on the other hand, scenes in front of and behind the user can be combined into the target image, which increases the interest of shooting.
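A sketch of selecting one front and one rear camera with the Android camera2 API (error handling and permissions are omitted, and whether both cameras can stream previews at the same time depends on the device):

    import android.content.Context
    import android.hardware.camera2.CameraCharacteristics
    import android.hardware.camera2.CameraManager

    // Return the ids of a front-facing and a rear-facing camera, if present.
    fun findFrontAndRearCameras(context: Context): Pair<String?, String?> {
        val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
        var front: String? = null
        var rear: String? = null
        for (id in manager.cameraIdList) {
            when (manager.getCameraCharacteristics(id).get(CameraCharacteristics.LENS_FACING)) {
                CameraCharacteristics.LENS_FACING_FRONT -> if (front == null) front = id
                CameraCharacteristics.LENS_FACING_BACK -> if (rear == null) rear = id
            }
        }
        return front to rear   // each id can then be opened to feed one of the two preview images
    }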
In an optional embodiment, the first preview image is a preview image acquired by the first camera, an image acquired from a server, or a locally stored image; the second preview image is a preview image acquired by a second camera, an image acquired from a server or a locally stored image; wherein the first camera is different from the second camera.
In this optional implementation manner, if the user is not satisfied with the preview image acquired by the front camera or the rear camera, the target image can be generated by combining a locally stored image or an image acquired from the server, which can improve the user's satisfaction with the target image. Moreover, since the user needs to look for a preview image among locally stored images, interaction between the user and the electronic device can also be enhanced.
In an alternative embodiment, in the case that the first preview image is a first video frame in a first video and the second preview image is a second video frame in a second video, the step of adjusting the transparency of the target region of the target preview image in response to the first input comprises the sub-steps of:
the first substep: in response to the first input, combining the video frames in the first video and the second video into video frame pairs;
and a second substep: and adjusting the transparency of the target area of the target video frame in each video frame pair to generate the target video.
With this optional manner, double exposure processing can be performed on a video, obtaining a target video with strong interest and a cool visual effect.
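A sketch of the video path, assuming the decoded frames of both videos are available as equally long Bitmap lists and reusing the composeTargetImage sketch shown earlier (frame decoding and encoding into the target video are omitted):

    import android.graphics.Bitmap
    import android.graphics.PointF

    // Pair the frames of the two videos index by index and adjust the target region in each pair.
    fun processVideoFramePairs(
        firstVideoFrames: List<Bitmap>, secondVideoFrames: List<Bitmap>,
        center: PointF, radius: Float, regionTransparency: Float
    ): List<Bitmap> =
        firstVideoFrames.zip(secondVideoFrames).map { (topFrame, bottomFrame) ->
            composeTargetImage(topFrame, bottomFrame, center, radius, regionTransparency)
        }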
It should be noted that the execution subject of the image processing method provided in the embodiment of the present application may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiment of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the image processing apparatus provided in the embodiment of the present application.
Fig. 5 is a block diagram of an image processing apparatus implementing an embodiment of the present application.
The image processing apparatus 500 of the embodiment of the present application includes: a display module 501, configured to display a first preview image and a second preview image, where the first preview image is displayed on the second preview image in an overlapping manner; a receiving module 502, configured to receive a first input of a user; an adjusting module 503, configured to adjust a transparency of a target area of the target preview image in response to the first input; wherein the target preview image is at least one of the first preview image and the second preview image.
Optionally, the target preview image is the first preview image, and the adjusting module includes:
a first sub-module for updating display positions of the first and second preview images in response to the first input;
and the second sub-module is used for receiving a second input and adjusting the transparency of the target area of the second preview image under the condition that the second preview image is superposed and displayed on the first preview image.
Optionally, the adjusting module is specifically configured to:
responding to the first input, and adjusting the transparency of a target area of a target preview image according to the transparency value corresponding to the first input;
and the target preview image is an image at the upper layer of the display position in the first preview image and the second preview image.
Optionally, the first preview image is a preview image acquired by a first camera, an image acquired from a server, or a locally stored image;
the second preview image is a preview image acquired by a second camera, an image acquired from a server or a locally stored image;
the first camera is different from the second camera.
Optionally, the adjusting module includes:
a third sub-module, configured to, in response to the first input, respectively combine video frames in the first video and the second video into a video frame pair when the first preview image is a first video frame in a first video and the second preview image is a second video frame in a second video;
and the fourth sub-module is used for adjusting the transparency of a target area of the target video frame in each video frame pair to generate the target video.
The image processing apparatus provided by the embodiment of the application displays a first preview image and a second preview image, with the first preview image displayed on the second preview image in an overlapping manner; receives a first input of a user; and, in response to the first input, adjusts the transparency of a target area of the target preview image. The user can delineate a local area and set its transparency according to personalized requirements, so that only the target area, or only the areas other than the target area, in the adjusted target preview image exhibits a double exposure effect; the personalized requirements of the user can thus be met, and the interest of image processing is improved.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 6, an electronic device 600 is further provided in an embodiment of the present application, and includes a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and executable on the processor 601, where the program or the instruction is executed by the processor 601 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing the embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components; the power supply may be logically coupled to the processor 710 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, or combine some components, or arrange components differently, which is not repeated here.
The display unit 706 is configured to display a first preview image and a second preview image, where the first preview image is displayed on the second preview image in an overlapping manner;
a user input unit 707 for receiving a first input by a user;
the processor 710 is further configured to adjust a target transparency of the target area in response to the first input, wherein the target preview image is at least one of the first preview image and the second preview image.
In the embodiment of the application, the electronic equipment displays a first preview image and a second preview image; receives a first input of a user; and, in response to the first input, adjusts the target transparency of the target area of the target preview image. The user can delineate a local area and set its transparency according to personalized requirements, so that only the target area, or only the areas other than the target area, in the adjusted target preview image exhibits a double exposure effect; the personalized requirements of the user can thus be met, and the interest of image processing is improved.
Optionally, the target preview image is the first preview image, and when the processor 710 adjusts the transparency of the target area of the target preview image in response to the first input, the method is specifically configured to: updating display positions of the first preview image and the second preview image in response to the first input;
the user input unit 707 is further configured to receive a second input, and adjust the transparency of the target region of the second preview image when the second preview image is displayed in a superimposed manner on the first preview image.
Optionally, when the processor 710 adjusts the transparency of the target area of the target preview image in response to the first input, the processor is specifically configured to: responding to the first input, and adjusting the transparency of a target area of a target preview image according to the transparency value corresponding to the first input; and the target preview image is an image at the upper layer of the display position in the first preview image and the second preview image.
Optionally, the first preview image is a preview image acquired by a first camera, an image acquired from a server, or a locally stored image; the second preview image is a preview image acquired by a second camera, an image acquired from a server or a locally stored image; the first camera is different from the second camera.
Optionally, when the first preview image is a first video frame in a first video and the second preview image is a second video frame in a second video, and the processor 710 is configured to, in response to the first input, adjust transparency of a target area of the target preview image, specifically: in response to the first input, respectively composing video frames in the first video and the second video into video frame pairs; and adjusting the transparency of the target area of the target video frame in each video frame pair to generate the target video.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data, including but not limited to applications and operating systems. Processor 710 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of another like element in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order, depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (9)

1. An image processing method, characterized in that the method comprises:
displaying a first preview image and a second preview image, wherein the first preview image is displayed on the second preview image in an overlapping manner or the second preview image is displayed on the first preview image in an overlapping manner;
receiving a first input of a user;
adjusting a transparency of a target region of a target preview image in response to the first input;
wherein the target preview image is at least one of the first preview image and the second preview image;
further comprising:
switching a stacking order of the first preview image and the second preview image in response to the first input;
wherein the first input is used to determine a target area while the target preview image is being determined;
the first input is used for adjusting the transparency of the target area in the target preview image to be smaller than the target transparency under the condition that the transparency of the target preview image is larger than the target transparency, and adjusting the transparency of the target area in the target preview image to be larger than the target transparency under the condition that the transparency of the target preview image is smaller than the target transparency;
the target transparency is a transparency of a target area set by the first input;
in a case where the first preview image is a first video frame in a first video and the second preview image is a second video frame in a second video, the adjusting transparency of the target region of the target preview image in response to the first input includes:
in response to the first input, respectively composing video frames in the first video and the second video into video frame pairs;
and adjusting the transparency of the target area of the target video frame in each video frame pair to generate the target video.
2. The method of claim 1, wherein the target preview image is the first preview image, and wherein adjusting the transparency of the target area of the target preview image in response to the first input comprises:
updating display positions of the first preview image and the second preview image in response to the first input;
and receiving a second input, and under the condition that the second preview image is displayed on the first preview image in an overlapping manner, adjusting the transparency of a target area of the second preview image.
3. The method of claim 1, wherein the step of adjusting the transparency of the target region of the target preview image in response to the first input comprises:
responding to the first input, and adjusting the transparency of a target area of a target preview image according to the transparency value corresponding to the first input;
and the target preview image is an image displayed at the upper layer in the first preview image and the second preview image.
4. The method of claim 1, wherein:
the first preview image is a preview image acquired by a first camera, an image acquired from a server or a locally stored image;
the second preview image is a preview image acquired by a second camera, an image acquired from a server or a locally stored image;
the first camera is different from the second camera.
5. An image processing apparatus, characterized in that the apparatus comprises:
the display module is used for displaying a first preview image and a second preview image, wherein the first preview image is displayed on the second preview image in an overlapping mode or the second preview image is displayed on the first preview image in an overlapping mode;
the receiving module is used for receiving a first input of a user;
an adjustment module to adjust a transparency of a target region of a target preview image in response to the first input;
wherein the target preview image is at least one of the first preview image and the second preview image;
the adjustment module is further configured to: switching a stacking order of the first preview image and the second preview image in response to the first input;
wherein the first input is used to determine a target area while the target preview image is being determined;
the first input is used for adjusting the transparency of the target area in the target preview image to be smaller than the target transparency under the condition that the transparency of the target preview image is larger than the target transparency, and adjusting the transparency of the target area in the target preview image to be larger than the target transparency under the condition that the transparency of the target preview image is smaller than the target transparency; the target transparency is a transparency of a target area set by the first input;
the adjustment module includes:
a third sub-module, configured to, in response to the first input, respectively combine video frames in the first video and the second video into a video frame pair when the first preview image is a first video frame in a first video and the second preview image is a second video frame in a second video;
and the fourth sub-module is used for adjusting the transparency of a target area of the target video frame in each video frame pair to generate the target video.
6. The apparatus of claim 5, wherein the target preview image is the first preview image, and wherein the adjustment module comprises:
a first sub-module for updating the display positions of the first and second preview images in response to the first input;
and the second sub-module is used for receiving a second input and adjusting the transparency of the target area of the second preview image under the condition that the second preview image is superposed and displayed on the first preview image.
7. The apparatus of claim 5, wherein the adjustment module is specifically configured to:
responding to the first input, and adjusting the transparency of a target area of a target preview image according to the transparency value corresponding to the first input;
and the target preview image is an image at the upper layer of the display position in the first preview image and the second preview image.
8. The apparatus of claim 5, wherein:
the first preview image is a preview image acquired by a first camera, an image acquired from a server or a locally stored image;
the second preview image is a preview image acquired by a second camera, an image acquired from a server or a locally stored image;
the first camera is different from the second camera.
9. An electronic device, comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 4.
CN202011015173.2A 2020-09-24 2020-09-24 Image processing method and device and electronic equipment Active CN112135049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011015173.2A CN112135049B (en) 2020-09-24 2020-09-24 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011015173.2A CN112135049B (en) 2020-09-24 2020-09-24 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112135049A CN112135049A (en) 2020-12-25
CN112135049B (en) 2022-12-06

Family

ID=73839606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011015173.2A Active CN112135049B (en) 2020-09-24 2020-09-24 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112135049B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995500B (en) * 2020-12-30 2023-08-08 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and medium
CN112887622B (en) * 2021-01-28 2022-12-06 维沃移动通信有限公司 Shooting method, shooting device, shooting equipment and storage medium
US20230377306A1 (en) * 2021-06-16 2023-11-23 Honor Device Co., Ltd. Video Shooting Method and Electronic Device
CN115484390B (en) * 2021-06-16 2023-12-19 荣耀终端有限公司 Video shooting method and electronic equipment
CN113961113A (en) * 2021-10-28 2022-01-21 维沃移动通信有限公司 Image processing method and device, electronic equipment and readable storage medium
CN114286002A (en) * 2021-12-28 2022-04-05 维沃移动通信有限公司 Image processing circuit, method and device, electronic equipment and chip
CN114520875B (en) * 2022-01-28 2024-04-02 西安维沃软件技术有限公司 Video processing method and device and electronic equipment
CN114598823A (en) * 2022-03-11 2022-06-07 北京字跳网络技术有限公司 Special effect video generation method and device, electronic equipment and storage medium
CN117676314A (en) * 2024-01-29 2024-03-08 荣耀终端有限公司 Photographing method, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110971832A (en) * 2019-12-20 2020-04-07 维沃移动通信有限公司 Image shooting method and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102477522B1 (en) * 2015-09-09 2022-12-15 삼성전자 주식회사 Electronic device and method for adjusting exposure of camera of the same
CN105163041A (en) * 2015-10-08 2015-12-16 广东欧珀移动通信有限公司 Realization method and apparatus for local double exposure, and mobile terminal
CN105208288A (en) * 2015-10-21 2015-12-30 维沃移动通信有限公司 Photo taking method and mobile terminal
CN111107267A (en) * 2019-12-30 2020-05-05 广州华多网络科技有限公司 Image processing method, device, equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110971832A (en) * 2019-12-20 2020-04-07 维沃移动通信有限公司 Image shooting method and electronic equipment

Also Published As

Publication number Publication date
CN112135049A (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN112135049B (en) Image processing method and device and electronic equipment
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN111654635A (en) Shooting parameter adjusting method and device and electronic equipment
CN112492212B (en) Photographing method and device, electronic equipment and storage medium
CN111669507A (en) Photographing method and device and electronic equipment
CN111756995A (en) Image processing method and device
CN113794829B (en) Shooting method and device and electronic equipment
CN111857512A (en) Image editing method and device and electronic equipment
CN112738402B (en) Shooting method, shooting device, electronic equipment and medium
CN111669506A (en) Photographing method and device and electronic equipment
CN113329172B (en) Shooting method and device and electronic equipment
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112954209B (en) Photographing method and device, electronic equipment and medium
CN112702531B (en) Shooting method and device and electronic equipment
CN114025092A (en) Shooting control display method and device, electronic equipment and medium
CN114143461B (en) Shooting method and device and electronic equipment
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN112333395B (en) Focusing control method and device and electronic equipment
CN112333389B (en) Image display control method and device and electronic equipment
CN113286085B (en) Display control method and device and electronic equipment
CN112584040B (en) Image display method and device and electronic equipment
CN112312021B (en) Shooting parameter adjusting method and device
CN113961113A (en) Image processing method and device, electronic equipment and readable storage medium
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN114339073B (en) Video generation method and video generation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant