CN111083374A - Filter adding method and electronic equipment

Filter adding method and electronic equipment

Info

Publication number
CN111083374A
Authority
CN
China
Prior art keywords
image
area
input
target
filter
Prior art date
Legal status
Granted
Application number
CN201911379549.5A
Other languages
Chinese (zh)
Other versions
CN111083374B (en)
Inventor
吴禹辰
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911379549.5A
Publication of CN111083374A
Application granted
Publication of CN111083374B
Legal status: Active
Anticipated expiration

Classifications

    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T7/90 Determination of colour characteristics
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Signal Processing
  • Human Computer Interaction
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Computer Vision & Pattern Recognition
  • Studio Devices
  • Controls And Circuits For Display Device

Abstract

The embodiment of the invention provides a filter adding method and an electronic device, relates to the field of communication technology, and aims to solve the problem that the existing filter adding process is complicated. The scheme comprises the following steps: receiving a first input to a first area in a screen; in response to the first input, displaying a target filter image in a second area, wherein the second area is the area of the screen corresponding to the under-screen camera and is different from the first area; and collecting, through the under-screen camera, light passing through the second area and light emitted by the second area to obtain a target image, and displaying the target image in the first area. The first input is an input for selecting a target filter mode in the first area, and the target filter image is the filter image corresponding to the target filter mode; alternatively, the first input is an input for drawing a filter image in the first area, and the target filter image is an image obtained by scaling down the drawn filter image. The scheme is applied to scenes in which a filter is added in an electronic device.

Description

Filter adding method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a filter adding method and electronic equipment.
Background
With the development of mobile communication technology, taking pictures with the photographing function of an electronic device has become an indispensable part of daily life. In general, in order to enrich the way a picture is presented, a user may add a filter to the picture during post-processing so that it shows different visual effects.
However, in the existing filter adding method, color mapping is performed on a picture through a software algorithm: after the picture is obtained, the software algorithm analyzes the change rule of the desired color effect, and the analyzed change rule is then applied to the picture to which the filter needs to be added, so as to simulate the color effect of a physical camera filter. As a result, the filter adding process is relatively complicated.
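For context, a minimal sketch of the kind of software color mapping described above, assuming the filter is expressed as one 256-entry lookup table per color channel applied to ARGB pixel data (the data class and the table values are illustrative, not taken from the patent):

```kotlin
// Illustrative sketch of conventional software-based filter mapping:
// a per-channel lookup table derived from the analysed colour-change rule,
// applied pixel by pixel to an already-captured image.

data class RawImage(val width: Int, val height: Int, val argb: IntArray)

fun applyColorLut(image: RawImage, lutR: IntArray, lutG: IntArray, lutB: IntArray): RawImage {
    val out = IntArray(image.argb.size)
    for (i in image.argb.indices) {
        val p = image.argb[i]
        val a = (p ushr 24) and 0xFF
        val r = lutR[(p ushr 16) and 0xFF]
        val g = lutG[(p ushr 8) and 0xFF]
        val b = lutB[p and 0xFF]
        out[i] = (a shl 24) or (r shl 16) or (g shl 8) or b
    }
    return RawImage(image.width, image.height, out)
}

fun main() {
    // A hypothetical "warm" mapping: boost red slightly, damp blue slightly.
    val lutR = IntArray(256) { minOf(255, (it * 1.10).toInt()) }
    val lutG = IntArray(256) { it }
    val lutB = IntArray(256) { (it * 0.90).toInt() }
    val image = RawImage(2, 1, intArrayOf(0xFF336699.toInt(), 0xFF808080.toInt()))
    val filtered = applyColorLut(image, lutR, lutG, lutB)
    println(filtered.argb.joinToString { Integer.toHexString(it) })
}
```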
Disclosure of Invention
The embodiment of the invention provides a filter adding method and electronic equipment, and aims to solve the problem that the existing filter adding process is complex.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a filter adding method. The method is applied to an electronic device including an under-screen camera and may include the following steps: receiving a first input to a first area in a screen; in response to the first input, displaying a target filter image in a second area, where the second area is the area of the screen corresponding to the under-screen camera and is different from the first area; and collecting, through the under-screen camera, light passing through the second area and light emitted by the second area to obtain a target image, and displaying the target image in the first area. The first input is an input for selecting a target filter mode in the first area, and the target filter image is the filter image corresponding to the target filter mode; alternatively, the first input is an input for drawing a filter image in the first area, and the target filter image is an image obtained by scaling down the drawn filter image.
In a second aspect, an embodiment of the present invention provides an electronic device. The electronic device includes an under-screen camera, a receiving module, a display module, and a processing module. The receiving module is configured to receive a first input to a first area in a screen; the display module is configured to display a target filter image in a second area in response to the first input received by the receiving module, where the second area is the area of the screen corresponding to the under-screen camera and is different from the first area; the processing module is configured to collect, through the under-screen camera, light passing through the second area and light emitted by the second area to obtain a target image; and the display module is further configured to display the target image obtained by the processing module in the first area. The first input is an input for selecting a target filter mode in the first area, and the target filter image is the filter image corresponding to the target filter mode; alternatively, the first input is an input for drawing a filter image in the first area, and the target filter image is an image obtained by scaling down the drawn filter image.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the filter adding method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the filter adding method as in the first aspect described above.
In the embodiment of the invention, the electronic device can receive a first input of a user to a first area in a screen; in response to the first input, display a target filter image in a second area of the screen corresponding to the under-screen camera; then collect, through the under-screen camera, light passing through the second area and light emitted by the second area to obtain a target image; and display the target image in the first area. The first input is an input for selecting a target filter mode in the first area, and the target filter image is the filter image corresponding to the target filter mode; alternatively, the first input is an input for drawing a filter image in the first area, and the target filter image is an image obtained by scaling down the drawn filter image. With this scheme, the electronic device can display the target filter image in the second area of the screen corresponding to the under-screen camera, so that the under-screen camera can collect both the external light passing through the second area and the light emitted by the second area while the target filter image is displayed there. The under-screen camera can therefore generate the target image from the two kinds of collected light, and the target image with the filter effect is displayed in the first area of the electronic device. In this way, the filter adding process is simplified.
Drawings
Fig. 1 is a schematic diagram of an architecture of an android operating system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a filter adding method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an interface for selecting a target filter according to an embodiment of the present invention;
FIG. 4 is a second schematic interface diagram illustrating a method for selecting a target filter according to an embodiment of the present invention;
FIG. 5 is a third schematic interface diagram illustrating a method for selecting a target filter according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an interface for drawing an image of a target filter according to an embodiment of the present invention;
FIG. 7 is a second schematic diagram illustrating a filter adding method according to an embodiment of the present invention;
FIG. 8 is a third schematic diagram illustrating a filter adding method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 10 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the described embodiments without making any inventive step, fall within the scope of protection of the present application.
The term "and/or" herein is an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, a/B denotes a or B.
The terms "first" and "second," etc. herein are used to distinguish between different objects and are not used to describe a particular order of objects. For example, the first input and the second input, etc. are for distinguishing different inputs, rather than for describing a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
The embodiment of the invention provides a filter adding method and an electronic device. The electronic device can receive a first input of a user to a first area in a screen; in response to the first input, display a target filter image in a second area of the screen corresponding to the under-screen camera; then collect, through the under-screen camera, light passing through the second area and light emitted by the second area to obtain a target image; and display the target image in the first area. The first input is an input for selecting a target filter mode in the first area, and the target filter image is the filter image corresponding to the target filter mode; alternatively, the first input is an input for drawing a filter image in the first area, and the target filter image is an image obtained by scaling down the drawn filter image. With this scheme, the electronic device can display the target filter image in the second area of the screen corresponding to the under-screen camera, so that the under-screen camera can collect both the external light passing through the second area and the light emitted by the second area while the target filter image is displayed there. The under-screen camera can therefore generate the target image from the two kinds of collected light, and the target image with the filter effect is displayed in the first area of the electronic device. In this way, the filter adding process is simplified.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment applied to the filter adding method provided by the embodiment of the present invention, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system. For example, in the embodiment of the present invention, the electronic device may specifically capture the target image through some capture applications.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
Generally, an application program may include two parts. One part is the content displayed on the screen of the electronic device, for example, the target image displayed by the electronic device in the first area. The other part is a service running in the background of the electronic device, which detects the user's input for the application program and performs a corresponding action in response; for example, if a first input of the user to the first area of the screen is detected, the target filter image may be displayed in the second area in response to the first input.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the filter adding method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the filter adding method may operate based on the android operating system shown in fig. 1. Namely, the processor or the electronic device can implement the filter adding method provided by the embodiment of the invention by running the software program in the android operating system.
The electronic device in the embodiment of the invention can be a terminal device. The terminal device may be a mobile terminal device or a non-mobile terminal device. For example, the mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiment of the present invention is not particularly limited.
The filter adding method provided by the embodiment of the present invention may be executed by the electronic device, or by a functional module and/or functional entity in the electronic device capable of implementing the method, which may be determined according to actual use requirements and is not limited in the embodiment of the present invention. The following takes the electronic device as an example to describe the filter adding method provided by the embodiment of the present invention.
In the embodiment of the present invention, after the user triggers the electronic device to run the system camera application or a shooting application, the electronic device may start the under-screen camera; for example, if the electronic device is in a self-portrait mode, it may start the under-screen camera located below its screen. When starting the system camera application or the shooting application, the electronic device may leave the filter function off by default, that is, no filter image is displayed in the screen area that is above the under-screen camera and corresponds to it.
One possible implementation scenario is that, when the electronic device displays a shooting interface, if the user wants the captured image to show a different effect, the user can, through a selection input of a filter mode, cause the electronic device to display the filter image corresponding to that filter mode in the screen area above the under-screen camera and corresponding to it, so that the captured image shows a different display effect.
Another possible implementation scenario is that, when the electronic device displays a shooting interface, if the user wants the captured image to show a different effect, the user may draw a filter image in a filter drawing interface of the electronic device, so that the electronic device displays the drawn filter image in the screen area above the under-screen camera and corresponding to it, and the captured image shows a different display effect.
It should be noted that, in the embodiment of the present invention, the screen of the electronic device may include at least a first area and a second area. The second area is the display area of the screen located above and corresponding to the under-screen camera, and the first area is the remaining display area of the screen other than the second area. The size of the first area is larger than that of the second area; for example, the first area may be tens or hundreds of times larger than the second area.
The following describes an exemplary filter adding method and an electronic device according to an embodiment of the present invention with reference to the accompanying drawings.
As shown in fig. 2, an embodiment of the present invention provides a filter adding method, which may be applied to an electronic device including an off-screen camera. The method may include steps 201 through 203 described below.
Step 201, the electronic device receives a first input to a first area in a screen.
In the embodiment of the present invention, the first input may be used to trigger the electronic device to display the filter image in the second area, where the second area is the area of the screen corresponding to the under-screen camera. Specifically, the first input may be an input by which the user selects a target filter mode in the first area, an input by which the user draws a filter image, or an input by which the user selects a target filter image.
Optionally, if the first input is an input that the user selects the target filter mode in the first area, when the electronic device receives the first input to the first area in the screen, the interface displayed by the electronic device may be a shooting preview interface, where the shooting preview interface may include at least one filter mode, and the at least one filter mode includes the target filter mode.
For example, in a case where the electronic device displays a shooting preview interface, if a user wants to add a filter effect to a preview image in the shooting preview interface, the user may make a first input to a target filter manner in at least one filter manner in the shooting preview interface to trigger the electronic device to display the target filter image in the second area in response to the first input.
Optionally, if the first input is an input of a user drawing a filter image, when the electronic device receives the first input of the user to the first area in the screen, an interface displayed by the electronic device may be a drawing filter interface, where the drawing filter interface may include drawing controls such as a brush, a shape, and a color.
For example, in a case where the electronic device displays a shooting preview interface, if the user wants to add a filter effect to a preview image in the shooting preview interface, the user may trigger the electronic device to display a drawing filter interface. Thereafter, the user may make a first input of drawing the filter image through a drawing control in the drawing filter interface, to trigger the electronic device to display the target filter image in the second region in response to the first input.
Optionally, if the first input is a selection input of the target filter image by the user, when the electronic device receives the first input to the first area in the screen, the interface displayed by the electronic device may be an image selection interface, where the image selection interface may include at least one image, and the at least one image includes the target filter image.
For example, in a case where the electronic device displays a shooting preview interface, if the user wants to add a filter effect to a preview image in the shooting preview interface, the user may trigger the electronic device to display an image selection interface, and then the user may make a first input on a target filter image in the image selection interface to trigger the electronic device to display the target filter image in the second area in response to the first input.
It should be noted that, before the electronic device receives the first input to the first area in the screen, the second area may not display any content, or the second area and the first area may be combined into a whole to display a complete interface or image.
Step 202, the electronic device displays the target filter image in the second area in response to the first input.
The second area may be the area of the screen corresponding to the under-screen camera, and the second area is different from the first area.
In an embodiment of the present invention, the first input may be an input for selecting a target filter mode in the first area, and the target filter image is the filter image corresponding to the target filter mode. Alternatively, the first input may be an input for drawing a filter image in the first region, and the target filter image may be an image obtained by scaling down the drawn filter image.
In the embodiment of the present invention, the second area may be an area determined according to the distance between the under-screen camera and the screen and the field angle of the under-screen camera, and the shape of the second area may be any possible shape such as a circle, a square, a rounded square, or an ellipse. As shown in fig. 3 (a), the embodiment of the present invention describes step 202 by taking the second region 31 as a circle as an example.
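As an illustration of how the second area might be derived from the camera-to-screen distance and the field angle mentioned above, a rough geometric sketch; the parameter values, the circular-area assumption, and the function name are hypothetical, and the lens aperture size is ignored:

```kotlin
import kotlin.math.PI
import kotlin.math.tan

// Rough estimate of the radius of a circular second area that covers the field of view
// of the under-screen camera at the plane of the screen (aperture size ignored).
// distanceMm: distance between the camera and the screen; fovDegrees: field angle.
fun secondAreaRadiusMm(distanceMm: Double, fovDegrees: Double): Double {
    val halfFovRad = fovDegrees / 2.0 * PI / 180.0
    return distanceMm * tan(halfFovRad)
}

fun main() {
    // Hypothetical values: camera 1.5 mm below the panel, 80-degree field angle.
    println("radius ~ %.2f mm".format(secondAreaRadiusMm(1.5, 80.0)))
}
```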
Optionally, the target filter image displayed in the second area 31 may be a preset filter image; alternatively, it may be one filter image randomly selected from a plurality of filter images; alternatively, it may be a filter image or the like displayed in the second area 31 last time in the history.
Different first inputs may correspond to different target filter images. Therefore, the embodiment of the present invention is described below by taking the following 3 scenes as examples.
Scene 1: when the first input is an input for selecting a target filter mode in the first area and the target filter mode is a preset filter mode, the target filter image corresponding to the target filter mode may be a filter image corresponding to the preset filter mode.
It should be noted that, for the preset filter mode, the filter image corresponding to the preset filter mode is a designated image, and the user is not allowed to edit the filter image, so that after the user selects one preset filter mode, the electronic device directly displays the filter image corresponding to the preset filter mode in the second area 31.
As shown in fig. 3 (a), upon a user triggering the electronic device to run a system camera application or a capture application, the electronic device may display a capture interface that includes a filter control 32. If the user wants to add a filter effect to the preview image in the shooting preview interface, the user can perform a click input on the filter control 32, so that the electronic device can display a plurality of controls 33 as shown in (b) of fig. 3 in response to the click input, and each of the plurality of controls 33 can correspond to one filter mode respectively. Each control can include a control identifier, and the control identifier can be a character or a thumbnail of a filter image corresponding to the filter mode. The multiple filter modes corresponding to the multiple controls 33 may be filter modes pre-stored in the electronic device, for example, common filter modes may include cold color, warm color, black and white, bright color, and the like.
The user may make a first input to any of the plurality of controls 33, which may be used to determine a filter style. If the user makes a first input to a first control of the plurality of controls 33 and the first control corresponds to the target filter mode, after the electronic device receives the first input, the electronic device may display a target filter image in the second area 31 located above the under-screen camera in response to the first input.
Optionally, after the electronic device displays the target filter image in the second area, the user may also input another control of the multiple controls 33 except the first control, so as to switch the filter image displayed in the second area 31. For example, as shown in fig. 3 (b), the user may click on the control identified with filter 1, thereby triggering the electronic device to switch the target filter image displayed in the second area 31 to filter 1, and the user may click on the control identified with filter 2, thereby triggering the electronic device to switch filter 1 displayed in the second area 31 to filter 2.
Scene 2: in a case where the first input is an input for selecting a target filter mode and the target filter mode is an image filter mode, the target filter image corresponding to the target filter mode may be one image selected by the user from at least one image corresponding to the image filter mode.
In the image filter mode, the target filter image is not a single designated image. Specifically, after the electronic device receives the first input, the user needs to select an image from the at least one image corresponding to the image filter mode, and may choose whether to perform an editing operation on the image as required. The user may then trigger the electronic device to display the image in the second area.
Optionally, step 202 may specifically include: the electronic equipment responds to a first input and displays at least one image in an image selection interface in the first area; thereafter, the electronic device may receive a second input from the user for the first image of the at least one image and display the target filter image in the second area in response to the second input. The target filter image may be an image obtained by scaling down the first image, or an image obtained by performing target editing operation on the first image and scaling down the first image.
Optionally, the target editing operation may include an editing operation on items such as image transparency, image shape, image display special effect, and the like.
Specifically, as shown in fig. 4 (a), the target filter mode may be an image filter mode. After the user makes a first input to the second control, the electronic device may, in response to the first input, display at least one image, i.e., image 1, image 2, image 3, image 4, image 5, etc., in the image selection interface 41 in the first region. The at least one image may be an image stored in an album of the electronic device, an image just shot by the user, or an image acquired by the electronic device from a server. For example, if the user finds none of the displayed images stored in the electronic device album satisfactory, the user may click the first photographing control 42. As shown in (b) of fig. 4, the electronic device may receive the click input and display a photographing interface in response; the user may make a click input to the second photographing control 43 in the photographing interface, and the electronic device may receive the click input and photograph an image in response. The electronic device can then display the captured image in the image selection interface 41 according to the user's operation instruction. Thereafter, the user may make a second input on a first image of the at least one image displayed in the image selection interface 41.
After the electronic device receives the second input of the user to the first image in the at least one image, the electronic device may perform any one of the following two implementation manners:
implementation mode 1: the electronic device may display the first image in a capture preview interface in a first area of the electronic device in response to the second input.
In a case where the first image is displayed in the photographing preview interface, the second input may include an input of selecting the first image from among the at least one image and an input of triggering the electronic device to display the first image in the photographing preview interface.
Specifically, the electronic device may display the at least one image in the image selection interface 41 as thumbnails. The user may then make one input on the first image; as shown in fig. 5 (a), the electronic device may receive and respond to that input by cancelling the display of the at least one image and displaying the first image enlarged according to a first scale. If, after previewing the enlarged first image, the user is satisfied with it, the user may make another input, for example a click input on the application control 51, and the electronic device may receive and respond to the click input by displaying the target filter image corresponding to the first image in the second area.
Implementation mode 2: the electronic device may display, in response to the second input, an image after the target editing operation is performed on the first image in a shooting preview interface in the first area of the electronic device.
In the case where an image obtained by performing the object editing operation on the first image is displayed in the shooting preview interface, the second input may include: the method includes selecting an input of a first image from at least one image, triggering the electronic device to perform a target editing operation on the first image, and triggering the electronic device to display the first image in a capture preview interface.
Specifically, after the user's input on the first image causes the electronic device to display the enlarged first image, if the user is not satisfied with the first image after previewing it, the user may trigger the electronic device to perform the target editing operation on the first image. After the electronic device completes the target editing operation, the user may trigger the electronic device to display, in the second area, the target filter image corresponding to the edited first image.
Illustratively, with continued reference to (a) of fig. 5, the user may make a click input on the editing control 52. As shown in (b) of fig. 5, in response to this input the electronic device may display an editing interface for the first image, which may include a plurality of editing sub-controls 53; the editing sub-controls 53 may correspond to transparency, cropping, special effects, or other possible editing items. Thereafter, the user may operate an editing sub-control 53 so that the electronic device performs a target editing operation on the first image in response; for example, the electronic device may adjust the image transparency of the first image to 20%, crop the image shape of the first image to a circle, and add a kaleidoscope display effect to the first image. After the electronic device completes the target editing operation, the user may make a click input on the application control 54, and the electronic device may receive and respond to the click input by displaying, in the second area, the target filter image corresponding to the edited first image.
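A simplified sketch of the kind of target editing operation and scale-down described above, modelling only the transparency adjustment and the proportional down-scaling; the data class, function names, and 20% opacity figure are illustrative, not taken from the patent:

```kotlin
data class EditableImage(val width: Int, val height: Int, val argb: IntArray)

// Adjust overall transparency, e.g. to 20% opacity as in the example above.
fun withTransparency(image: EditableImage, opacity: Double): EditableImage {
    val out = IntArray(image.argb.size)
    for (i in image.argb.indices) {
        val p = image.argb[i]
        val a = (((p ushr 24) and 0xFF) * opacity).toInt().coerceIn(0, 255)
        out[i] = (a shl 24) or (p and 0x00FFFFFF)
    }
    return EditableImage(image.width, image.height, out)
}

// Scale the edited image down by an integer factor (nearest neighbour) so that it
// fits the small second area above the under-screen camera.
fun scaleDown(image: EditableImage, factor: Int): EditableImage {
    val w = image.width / factor
    val h = image.height / factor
    val out = IntArray(w * h)
    for (y in 0 until h) {
        for (x in 0 until w) {
            out[y * w + x] = image.argb[(y * factor) * image.width + (x * factor)]
        }
    }
    return EditableImage(w, h, out)
}
```

The scaled-down result would then be what step 202 displays in the second area 31.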
Scene 3: in the case where the first input is an input for drawing a filter image in the first region, the target filter image may be an image in which the filter image drawn by the user is scaled down.
Optionally, in a case where the first input is an input for drawing a filter image in the first area, before the "displaying the target filter image in the second area" in step 202, the method may further include: displaying the drawn filter image in the first area.
After the user makes an input on the filter control in the shooting interface, the electronic device may display a drawing filter control in response. After the user makes an input on the drawing filter control, as shown in fig. 6 (a), the electronic device may display a drawing filter interface in the first region in response; the drawing filter interface may include drawing controls such as a brush, a shape, and a color. The user may make, through the drawing controls in the drawing filter interface, a first input of drawing a filter image, and the electronic device may display the filter image 61 drawn by the user in the drawing filter interface in response to the first input. After the user finishes drawing, an input may be made on the application control 62, as shown in fig. 6 (b); the electronic device may receive the input and, in response, display in the second area the image obtained by scaling down the drawn filter image 61, i.e., the target filter image 63.
Step 203, the electronic device collects, through the under-screen camera, light passing through the second area and light emitted by the second area to obtain a target image, and displays the target image in the first area.
The electronic device can use the under-screen camera to shoot a target object; the under-screen camera can collect not only the natural light passing through the second area but also the light emitted by the second area. Specifically, while the target filter image is displayed in the second area, part of the light corresponding to the target filter image is emitted towards the outside of the electronic device, and another part of the light enters the inside of the electronic device, where it can be collected by the photosensitive chip (e.g., a CMOS chip) of the under-screen camera. The image sensor in the electronic device can then convert the light collected by the under-screen camera into an electrical signal, and an internal analog-to-digital converter converts the electrical signal into a digital signal, thereby obtaining the target image. In this manner, the electronic device can display the target image in the interface of the first region.
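Purely as an illustration of the ordering in steps 202 and 203, a sketch with hypothetical display and camera interfaces; neither the interface names nor the method signatures come from the patent, and the point is only that the filter image occupies the second area while the under-screen camera exposes, so the captured frame already mixes the scene light with the filter image's own emitted light:

```kotlin
// Hypothetical interfaces standing in for the real display and under-screen camera drivers.
interface ScreenRegion { fun show(image: IntArray) }
interface UnderScreenCamera { fun captureFrame(): IntArray }   // returns ARGB pixels

class FilterCapture(
    private val secondArea: ScreenRegion,
    private val firstArea: ScreenRegion,
    private val camera: UnderScreenCamera
) {
    // Step 202: show the (scaled-down) target filter image directly above the camera.
    // Step 203: expose through the second area; the sensor receives both the scene light
    // passing through the panel and the light emitted by the displayed filter image,
    // and the resulting target image is then shown in the first area.
    fun captureWithFilter(targetFilterImage: IntArray): IntArray {
        secondArea.show(targetFilterImage)
        val targetImage = camera.captureFrame()
        firstArea.show(targetImage)
        return targetImage
    }
}
```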
Optionally, the target image may be a picture or a video, that is, after the electronic device displays the target filter image in the second area, the user may shoot an image through the off-screen camera, or record a video through the off-screen camera.
The embodiment of the invention provides a filter adding method. Since the electronic device can display the target filter image in the second area of the screen corresponding to the under-screen camera, the under-screen camera can collect both the external light passing through the second area and the light emitted by the second area while the target filter image is displayed there. The under-screen camera can therefore generate the target image from the two kinds of collected light, and the target image with the filter effect is displayed in the first area of the electronic device. In this way, the filter adding process is simplified.
Optionally, the target filter image may include a plurality of images. When the user wants to record a video using the electronic device and wants the captured video to show different filter effects, the user may trigger the electronic device to control the second area of the screen to display each of the plurality of images in sequence, so that the electronic device records video frames showing different effects.
Illustratively, referring to fig. 2, as shown in fig. 7, the step 202 may be specifically implemented by the following step 202a, and after the step 203, the filter adding method provided in the embodiment of the present invention may further include the following steps 204 to 205.
Step 202a, the electronic device sequentially displays each image of the plurality of images in the second area according to a preset period.
Optionally, the plurality of images may be multiple frames of a moving picture (e.g., an animated image), or multiple video frames of a video.
Optionally, the preset period may be a default period of the system, or a period set by a user.
Taking as an example the case where the preset period is set by user input: after the electronic device determines the target filter image and before the target filter image is displayed in the second region, the electronic device may display a floating frame for setting the display period of the target filter image; the user may enter a time in the floating frame and make a confirmation input, and the electronic device may, in response to the confirmation input, set the time entered by the user as the preset period.
Optionally, in the scene 1, the target filter image corresponding to the target filter mode may be a designated image corresponding to a preset filter mode, and the designated image may be multiple pictures or videos. After the user selects the target filter mode, the electronic device may sequentially display each of the plurality of pictures in the second area or sequentially display each of the video frames in the video in the second area according to a preset period.
Optionally, in the above scene 2, the first image may be a video, and the filter image corresponding to the target filter mode may be the video, or a filter image generated after performing a target editing operation on the video, and after receiving an input applying the video, the electronic device may respond to the input and sequentially display each frame image in the video in the second area according to a preset period.
Optionally, in the scene 3, the electronic device may acquire a plurality of images drawn by the user, for example, the electronic device may acquire the plurality of images drawn by the user in the drawing filter interface, or acquire the images in the drawing filter interface according to a preset interval in the process of drawing the images by the user, so as to obtain the plurality of images. After the electronic device obtains the plurality of images, if an input that the user confirms that the electronic device displays the plurality of images in the second area is received, the electronic device may sequentially display each image in the plurality of images in the second area according to a preset cycle.
Optionally, the electronic device may sequentially display each of the plurality of images in the second area at fixed time intervals according to a preset period, or may sequentially display each of the plurality of images in the second area at non-fixed time intervals according to the preset period.
Illustratively, a plurality of images are taken as 3 images drawn by the user, and the preset period is 3 seconds. In one possible implementation, after the electronic device acquires the 3 images, the electronic device may sequentially display each of the 3 images in the second area at fixed time intervals of 1 second according to a period of 3 seconds, that is, the electronic device may display the 1 st image of the 3 images at the 1 st second, display the 2 nd image of the 3 images at the 2 nd second, and display the 3 rd image of the 3 images at the 3 rd second. In another possible implementation manner, after the electronic device acquires the 3 images, the electronic device may receive a setting input of a user for a display time of each image, and then the electronic device may sequentially display each image in the 3 images within 3 seconds according to the setting of the user, for example, the electronic device may display a 1 st image in the 3 images at a 1 st second, display a 2 nd image in the 3 images at a 1.5 th second, and display a 3 rd image in the 3 images at a 2.5 th second.
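A small sketch of the scheduling just described: given the preset period and either fixed or user-set display times, pick which of the images should currently occupy the second area. The 3-image, 3-second figures follow the example above; the function name and offsets are illustrative:

```kotlin
// Index of the image to display at a given moment, cycling with the preset period.
// displayOffsetsMs holds each image's start time within one period; for the
// fixed-interval case with 3 images and a 3-second period this is [0, 1000, 2000].
fun currentImageIndex(elapsedMs: Long, periodMs: Long, displayOffsetsMs: List<Long>): Int {
    val t = elapsedMs % periodMs
    var index = 0
    for (i in displayOffsetsMs.indices) {
        if (t >= displayOffsetsMs[i]) index = i
    }
    return index
}

fun main() {
    val fixed = listOf(0L, 1000L, 2000L)             // images 1..3 at fixed 1 s intervals
    val userSet = listOf(0L, 500L, 1500L)            // images 1..3 at user-chosen times
    println(currentImageIndex(2500, 3000, fixed))    // -> 2 (third image)
    println(currentImageIndex(2500, 3000, userSet))  // -> 2 (third image)
}
```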
Step 204, after the target image is displayed in the first area, the electronic device receives a third input to the target control.
Optionally, the target control may be a shooting control or other possible controls.
For example, before capturing an image, the electronic device may display, in the shooting preview interface, a preview image collected by the under-screen camera; the preview image may be used by the user to confirm the framing range. When the target image is the preview image, after the target image is displayed in the first area, if the user is satisfied with the target image, the user can make the third input on the target control, so that the electronic device receives the third input to the target control and is triggered to capture an image.
Optionally, the third input may be a touch input, a voice input, a gesture input, or the like. For example, the touch input may be a click input or a long-press input of a user on a shooting control displayed by the electronic device. The capture control can be used to trigger the electronic device to capture an image.
It should be noted that, after receiving the third input to the target control, the electronic device may continue to sequentially display each of the plurality of images in the second area according to the preset period.
Step 205, in response to the third input, the electronic device collects, through the under-screen camera, light passing through the second area and light emitted by the second area to obtain a first video.
In a case where the electronic device sequentially displays each of the plurality of images in the second area according to the preset period, the electronic device may, in response to the third input, collect light passing through the second area and light emitted by the second area through the under-screen camera to obtain the first video. A video frame of the first video and the target image are different images, but the image content in a video frame of the first video may be the same as that of the target image.
It should be noted that the first video may be a video including multiple filter images, for example, if the first video includes 3 video frames, the multiple images sequentially displayed in the second region include filter image 1, filter image 2, and filter image 3, and a time interval between adjacent video frames is the same as a time interval for switching the filter images in the second region, the 1 st video frame may include filter image 1, the 2 nd video frame may include filter image 2, and the 3 rd video frame may include filter image 3.
In the embodiment of the invention, under the condition that the target filter image comprises a plurality of images, the electronic equipment can display each image in the plurality of images in the second area in sequence, so that the first video shot by the electronic equipment by adopting the under-screen camera can comprise a plurality of filter images, and the richness of the video display effect is increased.
Optionally, in a case where the target filter image includes a plurality of images, the electronic device may control the second area of the screen to sequentially display each of the plurality of images, and may also perform shooting in a case where the first area displays the target image.
Illustratively, referring to fig. 2 and fig. 8, in the filter adding method according to the embodiment of the present invention, the step 202 may be specifically realized by a step 202b described below, and the step 203 may be specifically realized by a step 203a described below.
Step 202b, the electronic device sequentially displays each image of the plurality of images in the second area according to a preset period.
In the case where the electronic device is capturing a video, the user may trigger the electronic device to add a filter effect to the video being captured via the first input.
For example, the user may add a drawn filter image to the picture being captured. Specifically, in a case where the user triggers the electronic device through an input to start shooting a video, a filter-adding control may be displayed in the screen of the electronic device. After the user clicks the filter-adding control, the electronic device can, while continuing the shooting operation, display a drawing filter interface superimposed on the shooting interface. The user can draw a filter image through the drawing controls in the drawing filter interface, and the electronic device can obtain a plurality of images and scale them down proportionally to obtain the target filter image. The electronic device may then not only display the filter image drawn by the user in the first area, but also sequentially display each of the scaled-down images in the second area according to the preset period.
Optionally, the user may draw the filter image at any time in the process of shooting the video by the electronic device, so that the electronic device adds the filter effect to the video at the corresponding time; and then, the user can select to trigger the electronic equipment to sequentially display the drawn filter images in the second area according to a preset period, and can also select to draw a new filter image, so that the electronic equipment can display the newly drawn filter image in the second area.
For example, the user may draw a filter image with a preset period of 3 seconds at the 5 th second of the video recorded by the electronic device, so that the video frames recorded by the electronic device at the 5 th to 7 th seconds are video frames with the filter effect added. And then, if the electronic equipment sequentially displays the filter images with the preset period of 3 seconds in the second area according to the preset period, the filter effect of the video frames recorded by the electronic equipment from 8 th to 10 th seconds is the same as the filter effect of the video frames recorded from 5 th to 7 th seconds. If the user selects to draw a new filter image with a preset period of 3 seconds at the 8 th second of the video recorded by the electronic device, the filter effect of the video frames recorded by the electronic device at the 8 th to 10 th seconds is different from the filter effect of the video frames recorded at the 5 th to 7 th seconds.
It should be noted that, for the specific implementation in which the electronic device sequentially displays each of the plurality of images in the second area according to the preset period, reference may be made to the corresponding description in step 202a, and details are not repeated here.
Step 203a, the electronic device collects, through the under-screen camera, light passing through the second area and light emitted by the second area to obtain a plurality of video frames, displays the plurality of video frames, and synthesizes the plurality of video frames to obtain a second video.
Optionally, the electronic device may synthesize a plurality of video frames into the second video after obtaining the plurality of video frames, or may synthesize a new video with a previously obtained video frame immediately after obtaining each video frame until obtaining the second video.
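The two synthesis strategies mentioned above could be sketched as follows, keeping the frames and the synthesis step abstract; in a real implementation the frames would be handed to a video encoder, and the names here are illustrative:

```kotlin
// Batch synthesis: wait until all frames are captured, then build the second video once.
fun synthesizeBatch(capturedFrames: List<IntArray>): List<IntArray> = capturedFrames.toList()

// Incremental synthesis: fold each newly captured frame into the working video immediately.
class IncrementalSynthesizer {
    private val video = mutableListOf<IntArray>()
    fun onFrameCaptured(frame: IntArray): List<IntArray> {
        video.add(frame)          // re-synthesize with the previously obtained frames
        return video.toList()     // the "new video" after this frame, up to the second video
    }
}
```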
In the embodiment of the invention, because the electronic equipment can add the filter effect to the picture being shot in the process of shooting the video, a user can select the time for adding the filter effect in the video, thereby increasing the flexibility of adding the filter to the electronic equipment.
In the embodiment of the present invention, the filter adding method shown in the above-mentioned method drawings is exemplarily described with reference to one drawing in the embodiment of the present invention. In specific implementation, the filter adding method shown in each method figure can also be implemented by combining any other combinable figure shown in the above embodiments, and details are not described here.
As shown in fig. 9, an embodiment of the invention provides an electronic device 900. The electronic device 900 may include an under-screen camera, a receiving module 901, a display module 902, and a processing module 903. The receiving module 901 may be configured to receive a first input to a first area in a screen. The display module 902 may be configured to display the target filter image in a second area in response to the first input received by the receiving module 901, where the second area is the area of the screen corresponding to the under-screen camera and is different from the first area. The processing module 903 may be configured to collect, through the under-screen camera, light passing through the second area and light emitted by the second area to obtain a target image. The display module 902 may be further configured to display the target image obtained by the processing module 903 in the first area. The first input may be an input for selecting a target filter mode in the first area, and the target filter image is the filter image corresponding to the target filter mode; alternatively, the first input may be an input for drawing a filter image in the first area, and the target filter image may be an image obtained by scaling down the drawn filter image.
Optionally, in this embodiment of the present invention, the first input may be an input for selecting a target filter mode in the first area, and the display module 902 may be further configured to display at least one image in the first area before displaying the target filter image in the second area. The receiving module 901 may further be configured to receive a second input to the first image in the at least one image displayed by the displaying module 902. The display module 902 may be specifically configured to display the target filter image in the second area in response to the second input received by the receiving module 901. The target filter image may be an image obtained by scaling down the first image, or an image obtained by performing target editing operation on the first image and scaling down the first image.
Optionally, in this embodiment of the present invention, the first input may be an input for drawing a filter image in the first region. The display module 902 may be further configured to display the drawn filter image in the first area before displaying the target filter image in the second area.
Optionally, in this embodiment of the present invention, the display module 902 may be specifically configured to sequentially display each of the plurality of images in the second area according to a preset period. The receiving module 901 may further be configured to receive a third input to the target control after the display module 902 displays the target image in the first area. The processing module 903 may further be configured to collect, by the off-screen camera, light passing through the second area and light emitted by the second area in response to the third input received by the receiving module 901, so as to obtain the first video.
Optionally, in this embodiment of the present invention, the target filter image may include a plurality of images, and the target image may be a plurality of video frames. The display module 902 may be specifically configured to sequentially display each of the plurality of images in the second area according to a preset period. The processing module 903 may be specifically configured to collect light passing through the second region and light emitted by the second region through the off-screen camera to obtain a plurality of video frames, and synthesize the plurality of video frames to obtain a second video. The display module 902 may be specifically configured to display the plurality of video frames obtained by the processing module 903.
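For orientation, a hypothetical Kotlin outline of how the three modules of electronic device 900 could be wired together; the class and method names mirror the description above but are assumptions, not taken from any real implementation:

```kotlin
// Hypothetical outline of electronic device 900: receiving, display and processing modules
// wired around a stand-in for the under-screen camera capture function.

class ReceivingModule(private val onFirstInput: (filterSelection: Int) -> Unit) {
    fun receiveFirstInput(filterSelection: Int) = onFirstInput(filterSelection)
}

class DisplayModule {
    fun displayInSecondArea(targetFilterImage: IntArray) { println("second area: ${targetFilterImage.size} px") }
    fun displayInFirstArea(targetImage: IntArray) { println("first area: ${targetImage.size} px") }
}

class ProcessingModule(private val captureFrame: () -> IntArray) {
    // Collects the light passing through and emitted by the second area (via the camera driver).
    fun obtainTargetImage(): IntArray = captureFrame()
}

fun main() {
    val display = DisplayModule()
    val processing = ProcessingModule { IntArray(4) }               // stand-in for the camera driver
    val filterImages = mapOf(0 to IntArray(16), 1 to IntArray(16))  // filter mode -> filter image
    val receiving = ReceivingModule { selected ->
        display.displayInSecondArea(filterImages.getValue(selected)) // step 202
        display.displayInFirstArea(processing.obtainTargetImage())   // step 203
    }
    receiving.receiveFirstInput(0)                                   // the first input (filter mode 0)
}
```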
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments; to avoid repetition, details are not described herein again.
The embodiment of the present invention provides an electronic device that can display a target filter image in a second area of the screen corresponding to an under-screen camera. While the second area displays the target filter image, the under-screen camera can collect both the external light passing through the second area and the light emitted by the second area, so that the electronic device can generate a target image from the two kinds of collected light and display the target image, which carries the filter effect, in the first area. In this way, the filter addition process is simplified.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention. As shown in fig. 10, the electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange the components differently. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like. It should be noted that the electronic device provided in the embodiment of the present invention may include an under-screen camera.
The processor 110 may be configured to control the user input unit 107 to receive a first input to a first area in the screen.
The processor 110 may be further configured to: control the display unit 106 to display a target filter image in a second area in response to the first input, where the second area is an area of the screen corresponding to the under-screen camera and is different from the first area; acquire, through the under-screen camera, light passing through the second area and light emitted by the second area, so as to obtain a target image; and control the display unit 106 to display the target image in the first area.
In the embodiment of the present invention, the first input is an input for selecting a target filter mode in the first area, and the target filter image is a filter image corresponding to the target filter mode; alternatively, the first input is an input for drawing a filter image in the first area, and the target filter image is an image obtained by scaling down the drawn filter image.
It can be understood that, in the embodiment of the present invention, the receiving module 901 in the structural schematic diagram of the electronic device (for example, fig. 9) may be implemented by the user input unit 107, the display module 902 may be implemented by the display unit 106, and the processing module 903 may be implemented by the processor 110.
The embodiment of the present invention provides an electronic device that can display a target filter image in a second area of the screen corresponding to an under-screen camera. While the second area displays the target filter image, the under-screen camera can collect both the external light passing through the second area and the light emitted by the second area, so that the electronic device can generate a target image from the two kinds of collected light and display the target image, which carries the filter effect, in the first area. In this way, the filter addition process is simplified.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call. Specifically, the radio frequency unit 101 receives downlink data from a base station and forwards it to the processor 110 for processing, and it sends uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. The audio output unit 103 may also provide audio output related to a specific function performed by the electronic device 100 (e.g., a call signal reception sound or a message reception sound). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processing unit 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101, and then output.
The electronic device 100 further includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as switching between horizontal and vertical screens, related games, and magnetometer posture calibration) and for vibration-recognition related functions (such as a pedometer and tapping). The sensor 105 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations performed by a user on or near it (e.g., operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or accessory). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 110, and it also receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may further include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 10 the touch panel 1071 and the display panel 1061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
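Purely as an illustration of the touch path just described, the plain-Kotlin sketch below models the hand-off from detection to controller to processor; the event types, threshold, and names are hypothetical and unrelated to any particular touch hardware.

```kotlin
// Hypothetical model of the touch pipeline; not an actual driver interface.
enum class TouchEventType { TAP, LONG_PRESS }

data class TouchEvent(val x: Float, val y: Float, val type: TouchEventType)

// Plays the role of the touch controller: converts raw detection data into
// touch point coordinates plus an event type and forwards them to the "processor".
class TouchController(private val processor: (TouchEvent) -> Unit) {
    fun onRawTouch(x: Float, y: Float, pressedMillis: Long) {
        val type = if (pressedMillis > 500) TouchEventType.LONG_PRESS else TouchEventType.TAP
        processor(TouchEvent(x, y, type))
    }
}

fun main() {
    val controller = TouchController { event ->
        // The processor decides the visual output from the event type.
        println("Render feedback for $event")
    }
    controller.onRawTouch(x = 40f, y = 300f, pressedMillis = 80L)
}
```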
The interface unit 108 is an interface for connecting an external device to the electronic device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information or power) from an external device and transmit the received input to one or more elements within the electronic device 100, or may be used to transmit data between the electronic device 100 and the external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data (such as audio data or a phonebook) created according to the use of the electronic device, and the like. Further, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the electronic device; it connects the various parts of the entire electronic device using various interfaces and lines, and it performs the functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the electronic device as a whole. The processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes the processor 110 shown in fig. 10, the memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the foregoing filter adding method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the foregoing filter adding method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again. The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A filter adding method, applied to an electronic device comprising an under-screen camera, wherein the method comprises:
receiving a first input to a first area in a screen;
in response to the first input, displaying a target filter image in a second area, the second area being an area of the screen corresponding to the under-screen camera, the second area being different from the first area;
acquiring light penetrating through the second area and light emitted by the second area through the under-screen camera to obtain a target image, and displaying the target image in the first area;
wherein the first input is an input for selecting a target filter mode in the first area, and the target filter image is a filter image corresponding to the target filter mode; or the first input is an input for drawing a filter image in the first area, and the target filter image is an image obtained by scaling down the drawn filter image.
2. The method of claim 1, wherein the first input is an input for selecting the target filter mode in the first area;
before the target filter image is displayed in the second area, the method further comprises:
displaying at least one image in the first area;
receiving a second input to a first image of the at least one image;
the displaying the target filter image in the second area includes:
displaying the target filter image in the second area in response to the second input;
wherein the target filter image is an image obtained by scaling down the first image, or an image obtained by performing a target editing operation on the first image and scaling down the first image.
3. The method of claim 1, wherein the first input is an input for drawing a filter image in the first area;
before the target filter image is displayed in the second area, the method further comprises:
displaying the drawn filter image in the first area.
4. The method of any one of claims 1 to 3, wherein the target filter image comprises a plurality of images, the target image being a preview image;
the displaying the target filter image in the second area includes:
sequentially displaying each image in the plurality of images in the second area according to a preset period;
after the target image is displayed in the first area, the method further comprises:
receiving a third input to the target control;
in response to the third input, acquiring light penetrating through the second area and light emitted by the second area through the under-screen camera to obtain a first video.
5. The method of any one of claims 1 to 3, wherein the target filter image comprises a plurality of images, the target image being a plurality of video frames;
the displaying the target filter image in the second area includes:
sequentially displaying each image in the plurality of images in the second area according to a preset period;
the acquiring light penetrating through the second area and light emitted by the second area through the under-screen camera to obtain a target image, and displaying the target image in the first area comprises:
acquiring light penetrating through the second area and light emitted by the second area through the under-screen camera to obtain a plurality of video frames, displaying the plurality of video frames, and synthesizing the plurality of video frames to obtain a second video.
6. An electronic device comprising an under-screen camera, wherein the electronic device comprises a receiving module, a display module, and a processing module;
the receiving module is used for receiving a first input to a first area in a screen;
the display module is configured to display a target filter image in a second area in response to the first input received by the receiving module, where the second area is an area of the screen corresponding to the under-screen camera and is different from the first area;
the processing module is used for acquiring light penetrating through the second area and light emitted by the second area through the under-screen camera to obtain a target image;
the display module is further configured to display the target image obtained by the processing module in the first area;
wherein the first input is an input for selecting a target filter mode in the first area, and the target filter image is a filter image corresponding to the target filter mode; or the first input is an input for drawing a filter image in the first area, and the target filter image is an image obtained by scaling down the drawn filter image.
7. The electronic device of claim 6, wherein the first input is an input for selecting the target filter mode in the first area;
the display module is further configured to display at least one image in the first area before the target filter image is displayed in the second area;
the receiving module is further configured to receive a second input to a first image in the at least one image displayed by the display module;
the display module is specifically configured to display the target filter image in the second area in response to the second input received by the receiving module;
wherein the target filter image is an image obtained by scaling down the first image, or an image obtained by performing a target editing operation on the first image and scaling down the first image.
8. The electronic device of claim 6, wherein the first input is an input for drawing a filter image in the first area;
the display module is further configured to display the drawn filter image in the first area before the target filter image is displayed in the second area.
9. The electronic device of any of claims 6-8, wherein the target filter image comprises a plurality of images, and wherein the target image is a preview image;
the display module is specifically configured to sequentially display each of the plurality of images in the second area according to a preset period;
the receiving module is further configured to receive a third input to a target control after the display module displays the target image in the first area;
the processing module is further configured to acquire, in response to the third input received by the receiving module, light penetrating through the second area and light emitted by the second area through the under-screen camera, so as to obtain a first video.
10. The electronic device of any of claims 6-8, wherein the target filter image comprises a plurality of images, the target image being a plurality of video frames;
the display module is specifically configured to sequentially display each of the plurality of images in the second area according to a preset period;
the processing module is specifically configured to acquire light penetrating through the second area and light emitted by the second area through the under-screen camera to obtain a plurality of video frames, and synthesize the plurality of video frames to obtain a second video;
the display module is specifically configured to display the plurality of video frames obtained by the processing module.
CN201911379549.5A 2019-12-27 2019-12-27 Filter adding method and electronic equipment Active CN111083374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911379549.5A CN111083374B (en) 2019-12-27 2019-12-27 Filter adding method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911379549.5A CN111083374B (en) 2019-12-27 2019-12-27 Filter adding method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111083374A true CN111083374A (en) 2020-04-28
CN111083374B CN111083374B (en) 2021-09-28

Family

ID=70318740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911379549.5A Active CN111083374B (en) 2019-12-27 2019-12-27 Filter adding method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111083374B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120069042A1 (en) * 2010-09-21 2012-03-22 Sony Ericsson Mobile Communications Japan, Inc. Sensor-equipped display apparatus and electronic apparatus
CN102411878A (en) * 2010-09-21 2012-04-11 索尼爱立信移动通信日本株式会社 Sensor-equipped display apparatus and electronic apparatus
CN106603772A (en) * 2017-01-26 2017-04-26 广东欧珀移动通信有限公司 Electronic device and image acquisition method
CN109309783A (en) * 2017-07-28 2019-02-05 益富可视精密工业(深圳)有限公司 Electronic device and its filter image pickup method
WO2019023957A1 (en) * 2017-08-02 2019-02-07 深圳传音通讯有限公司 Image capturing method and image capturing system of intelligent terminal
CN111355879A (en) * 2018-12-24 2020-06-30 北京小米移动软件有限公司 Image acquisition method and device containing special effect pattern and electronic equipment
CN109714532A (en) * 2018-12-29 2019-05-03 联想(北京)有限公司 Image-pickup method, treating method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GU YUN et al.: "Application of irregular-shaped pictures as backgrounds in news programs", 《现代电视技术》 (Modern Television Technology) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810640A (en) * 2021-08-12 2021-12-17 荣耀终端有限公司 Video processing method and device and electronic equipment
WO2023016067A1 (en) * 2021-08-12 2023-02-16 荣耀终端有限公司 Video processing method and apparatus, and electronic device
CN113747240A (en) * 2021-09-10 2021-12-03 荣耀终端有限公司 Video processing method, apparatus, storage medium, and program product
CN113747240B (en) * 2021-09-10 2023-04-07 荣耀终端有限公司 Video processing method, apparatus and storage medium
CN114339077A (en) * 2022-01-28 2022-04-12 维沃移动通信有限公司 Imaging method, imaging device, electronic apparatus, and storage medium

Also Published As

Publication number Publication date
CN111083374B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN111541845B (en) Image processing method and device and electronic equipment
CN110913132B (en) Object tracking method and electronic equipment
CN109639970B (en) Shooting method and terminal equipment
CN110891144B (en) Image display method and electronic equipment
CN108495029B (en) Photographing method and mobile terminal
CN111031398A (en) Video control method and electronic equipment
CN110933306A (en) Method for sharing shooting parameters and electronic equipment
CN111083374B (en) Filter adding method and electronic equipment
CN111010523B (en) Video recording method and electronic equipment
CN111010511B (en) Panoramic body-separating image shooting method and electronic equipment
CN110769174B (en) Video viewing method and electronic equipment
CN109618218B (en) Video processing method and mobile terminal
CN109246351B (en) Composition method and terminal equipment
CN110798621A (en) Image processing method and electronic equipment
CN111597370A (en) Shooting method and electronic equipment
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN111124231B (en) Picture generation method and electronic equipment
CN111031221B (en) Shooting method and electronic equipment
CN110022445B (en) Content output method and terminal equipment
CN110086998B (en) Shooting method and terminal
CN111064888A (en) Prompting method and electronic equipment
CN109104573B (en) Method for determining focusing point and terminal equipment
CN111064896A (en) Device control method and electronic device
CN108156386B (en) Panoramic photographing method and mobile terminal
CN108345657B (en) Picture screening method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant