CN113810602B - Shooting method and electronic equipment - Google Patents


Info

Publication number
CN113810602B
CN113810602B (application CN202110926567.1A)
Authority
CN
China
Prior art keywords
lut
preview image
image
template
interface
Prior art date
Legal status
Active
Application number
CN202110926567.1A
Other languages
Chinese (zh)
Other versions
CN113810602A
Inventor
王晨清
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110926567.1A
Publication of CN113810602A
Application granted
Publication of CN113810602B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A shooting method and an electronic device relate to the technical field of shooting, and apply a color lookup table (LUT) to a first preview image of a shooting preview interface, enriching the effects obtainable when recording video and meeting users' current shooting requirements. The method comprises the following steps: the electronic device displays a shooting preview interface in response to a user operation of starting the camera, the shooting preview interface comprising a first preview image; the electronic device determines a first target LUT, the first target LUT being one of a plurality of first LUTs; and the electronic device processes the first preview image according to the first target LUT to obtain a second preview image, wherein the preview effect of the first preview image is different from the preview effect of the second preview image.

Description

Shooting method and electronic equipment
Technical Field
The application relates to the technical field of shooting, in particular to a shooting method and electronic equipment.
Background
Currently, more and more people use electronic devices to take photos and record videos to capture moments of everyday life. However, the camera of an existing electronic device offers only a single, fixed style, so the videos it records look uniform and cannot meet users' increasingly diverse shooting requirements.
Disclosure of Invention
The present application provides a shooting method and an electronic device that process a first preview image of a shooting preview interface with a LUT, enriching the effects obtainable when recording video and meeting users' current shooting requirements.
In a first aspect, an embodiment of the present application provides a shooting method. The method can be applied to an electronic device comprising a camera and a display screen, the electronic device being preset with a plurality of first color lookup tables (LUTs). In the method, the electronic device first displays a shooting preview interface in response to a user operation of starting the camera, the shooting preview interface comprising a first preview image; the electronic device then determines a first target LUT, the first target LUT being one of the plurality of first LUTs; finally, the electronic device processes the first preview image according to the first target LUT to obtain a second preview image, wherein the preview effect of the first preview image is different from the preview effect of the second preview image.
By adopting the scheme, the electronic equipment can process the first preview image of the shooting preview interface according to the first target LUT determined by the electronic equipment in the shooting preview interface to obtain the second preview image; because the preview effects of the first preview image and the second preview image are different, in the video recording process, shooting effects or styles corresponding to different LUTs can be displayed, the effects of the recorded video images can be enriched, and the shooting requirements of current users are met.
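The core operation of the first aspect, mapping a first preview image through a target LUT to obtain a second preview image with a different preview effect, can be sketched in a few lines of Python. The function names and the "warm" curve below are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch of processing a preview image with a color lookup
# table (LUT). Each channel value is replaced by the value the LUT stores
# at that index; a non-identity LUT yields a different preview effect.
# The "warm" curve below is a made-up example, not a LUT from the patent.

def make_default_lut(levels=256):
    """Identity LUT: the default that leaves the first preview image unchanged."""
    identity = list(range(levels))
    return identity, identity, identity

def make_warm_lut(levels=256):
    """Hypothetical 'warm film' LUT: boosts red, dims blue slightly."""
    red = [min(levels - 1, int(v * 1.1)) for v in range(levels)]
    green = list(range(levels))
    blue = [int(v * 0.9) for v in range(levels)]
    return red, green, blue

def apply_lut(pixels, lut_rgb):
    """Map each (r, g, b) pixel through the LUT to get the second preview image."""
    lut_r, lut_g, lut_b = lut_rgb
    return [(lut_r[r], lut_g[g], lut_b[b]) for (r, g, b) in pixels]

first_preview = [(10, 20, 30), (200, 180, 160)]
second_preview = apply_lut(first_preview, make_warm_lut())
```

A real camera pipeline would apply such a mapping per frame on the GPU or ISP rather than per pixel in Python, but the lookup semantics are the same.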
In one possible design manner of the first aspect, the electronic device records the video image in response to the user operating the shooting control; the video image is obtained by processing the image acquired by the camera by the electronic equipment according to the first target LUT.
In the design mode, the electronic equipment can process the video image according to the first target LUT in the process of recording the video image, so that the effect of the recorded video image can be enriched, and the shooting requirement of a current user is met.
In one possible design manner of the first aspect, the shooting preview interface is an interface of the electronic device in a movie mode, the movie mode being a mode in which the electronic device records video according to the first target LUT. In this design, displaying the shooting preview interface in response to a user operation of starting the camera comprises: the electronic device displays a first interface in response to the user operation of starting the camera, the first interface comprising a first control used to instruct a first application that calls the camera to enter the movie mode; and the electronic device displays the shooting preview interface in response to a user operation on the first control. The first preview image is an image obtained by processing the image acquired by the camera according to a default LUT.
In this design, a movie mode is added to the shooting modes of the electronic device; after the electronic device enters the movie mode, the first preview image of the shooting preview interface can be processed with the default LUT, which enriches the effect of recorded video images and meets users' current shooting requirements.
In one possible design manner of the first aspect, the shooting preview interface includes first indication information, where the first indication information is used to prompt the user to adjust the display screen of the electronic device to the landscape state.
In this design, the user can adjust the display screen of the electronic device to the landscape state under the prompt of the first indication information, so that videos shot in the landscape state have a better effect.
In one possible design of the first aspect, the shooting preview interface includes a second control; the electronic device determining a first target LUT comprises: the electronic device displays options for the plurality of first LUTs in response to a user operation on the second control; and the electronic device determines the first target LUT in response to a user operation on one of the options.
In this design, the user can make the electronic device display the options for the plurality of first LUTs via the second control and then determine the first target LUT among them, which improves the user experience.
In one possible design manner of the first aspect, the electronic device determining a first target LUT comprises: the electronic device runs a preset AI model according to a first preset label, a second preset label, and a third preset label to identify image information of the first preview image, where the first preset label represents the tones corresponding to different gray-scale intervals, the second preset label represents the exposure of the first preview image, the third preset label represents scene information of the first preview image, and the image information includes the scene and the brightness; the electronic device then automatically matches a corresponding LUT according to the identified image information to determine the first target LUT.
In this design, the electronic device can identify the scene and brightness of the first preview image with the preset AI model and automatically match a first target LUT corresponding to them, further enriching the effect of recorded video images and meeting users' current shooting requirements.
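The matching step that follows the AI model can be sketched as a lookup keyed by the recognized image information. The scene labels, LUT names, and brightness threshold below are invented for illustration; the patent does not specify them.

```python
# Hypothetical stand-in for the automatic matching step: the preset AI
# model is assumed to output a scene label and a mean brightness for the
# first preview image, and a simple table plays the role of the matcher.

LUT_BY_SCENE = {
    ("portrait", "bright"): "soft_portrait_lut",
    ("portrait", "dim"): "warm_low_light_lut",
    ("landscape", "bright"): "vivid_lut",
    ("landscape", "dim"): "dusk_lut",
}

def match_first_target_lut(scene, mean_gray, default="default_lut"):
    """Automatically match a first target LUT from recognized image information."""
    brightness = "bright" if mean_gray >= 128 else "dim"
    return LUT_BY_SCENE.get((scene, brightness), default)
```

Falling back to the default LUT when no entry matches mirrors the document's behavior of processing the preview with the default LUT when no target LUT applies.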
In one possible design manner of the first aspect, the electronic device includes a setting interface, where the setting interface includes a setting item for running the preset AI model; whether to run the preset AI model can be set through the setting interface.
In this design, the user can set whether to run the preset AI model on the setting interface of the electronic device, which improves the user experience.
In one possible design manner of the first aspect, the method further includes: and the electronic equipment responds to the exit operation of the user on the first target LUT, and the electronic equipment processes the first preview image according to the default LUT.
In this design manner, after the user exits the first target LUT, the electronic device further processes the first preview image according to the default LUT, so as to improve the preview effect of the preview image and further improve the effect of the recorded video image.
In one possible design manner of the first aspect, the electronic device automatically matching the corresponding LUT according to the identified image information of the first preview image to determine the first target LUT comprises: the electronic device matches a corresponding first LUT according to the identified image information of the first preview image; the electronic device displays options for the plurality of first LUTs in response to a user operation on the second control; and the electronic device determines the first target LUT in response to a user operation on one of the options.
In this design, after the electronic device matches a first LUT corresponding to the identified image information of the first preview image, if the user does not want the matched first LUT, the user can operate the second control to make the electronic device display the options for the plurality of first LUTs and determine the first target LUT among them, which enriches the effect of recorded video images and meets users' current shooting requirements.
In a possible design manner of the first aspect, the shooting preview interface further includes a third control, and the method further comprises: the electronic device displays options for a plurality of second LUTs in response to a user operation on the third control, wherein the number of gray levels of the second LUT is greater than the number of gray levels of the first LUT.
In this design, after the user operates the third control, the electronic device displays the options for the plurality of second LUTs; because the second LUT has more gray levels than the first LUT, processing the preview image with a second LUT further improves the preview effect, further enriching the effect of recorded video images and meeting users' current shooting requirements.
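Why a larger gray-level count improves the preview can be seen with a small numeric sketch: a LUT with more entries quantizes tones more finely, so a value survives the mapping with less error. The level counts below are illustrative, not taken from the patent.

```python
# Sketch of the effect of a LUT's gray-level count: mapping a value
# through an identity LUT with N entries snaps it to the nearest of N
# stored levels, so a second LUT with more levels preserves more detail.

def quantize_through_lut(value, levels, max_value=255):
    """Snap `value` to the nearest entry of an identity LUT with `levels` entries."""
    step = max_value / (levels - 1)
    index = round(value / step)      # nearest LUT index
    return round(index * step)       # gray value stored at that index

coarse = quantize_through_lut(100, levels=17)    # fewer levels: larger error
fine = quantize_through_lut(100, levels=256)     # more levels: no visible error
```

With 17 levels the input 100 lands on 96 (an error of 4 gray values), while with 256 levels it is reproduced exactly.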
In one possible design manner of the first aspect, when the electronic device runs the preset AI model, the electronic device displays the second interface; the second interface is a dynamic interface for the electronic equipment to recognize the first preview image; the second interface includes a plurality of bubbles; any two bubbles of the plurality of bubbles are different in size and transparency.
In the design mode, when the electronic equipment runs the preset AI model, the interface of the electronic equipment also displays a dynamic effect picture of the identification process, thereby being beneficial to improving the experience of users.
In one possible design manner of the first aspect, the method further includes: when the electronic equipment starts to record the video image, the electronic equipment stops running the preset AI model.
In this design, the electronic device stops running the preset AI model when it starts to record the video image, which avoids frequent LUT recommendations during recording that would affect the shooting effect.
In one possible design manner of the first aspect, the method further includes: when the first preview image changes, the electronic device displays second indication information, where the second indication information is used to prompt the user whether to switch the target LUT, and the electronic device switches to a second target LUT in response to a confirmation operation of the user on the second indication information; or, the second indication information includes a LUT icon, and the electronic device switches to the second target LUT in response to a user operation on the LUT icon, the LUT icon showing a LUT different from the first target LUT.
In this design, when the first preview image changes, the electronic device displays the second indication information, and the user can decide according to it whether to switch from the first target LUT to the second target LUT, so that the second target LUT processes the changed first preview image, improving the user experience.
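One way this switching flow could work is sketched below: a change in the first preview image is detected (here by comparing normalized gray histograms, an assumption on my part), and a prompt proposing a second target LUT is produced. The threshold and message format are invented for illustration.

```python
# Hypothetical sketch of the switching flow: when the first preview image
# changes, the device shows second indication information proposing a
# second target LUT recommended for the new content.

def preview_changed(prev_hist, curr_hist, threshold=0.3):
    """Half the L1 distance between two normalized histograms, vs a threshold."""
    distance = sum(abs(a - b) for a, b in zip(prev_hist, curr_hist)) / 2
    return distance > threshold

def maybe_prompt_switch(prev_hist, curr_hist, current_lut, recommend):
    """Return the second indication text, or None if no switch is proposed."""
    if preview_changed(prev_hist, curr_hist):
        candidate = recommend(curr_hist)
        if candidate != current_lut:
            return "Switch to " + candidate + "?"
    return None
```

Only a confirmation (or a tap on the LUT icon) would actually apply the candidate LUT; the prompt itself leaves the first target LUT in effect.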
In one possible design manner of the first aspect, the electronic device processing the first preview image according to the first target LUT to obtain the second preview image comprises: the electronic device acquires identification information of the first target LUT; the electronic device acquires pixel information of the first preview image, the pixel information comprising the gray value of each pixel in the first preview image; and the electronic device converts the pixel information according to the identification information to obtain the second preview image.
In the design mode, the electronic equipment can convert the gray value of each pixel in the first preview image according to the identification information of the first target LUT so as to obtain the second preview image, so that different preview effects corresponding to different LUTs can be realized, the preview effect of the preview image can be improved, the effect of the recorded video image is enriched, and the shooting requirement of a current user is met.
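The three steps just described (obtain the LUT's identification information, obtain each pixel's gray value, convert) can be sketched as a lookup in a registry keyed by LUT id. The registry contents and curve below are hypothetical.

```python
# Sketch of the conversion step: the first target LUT's identification
# information selects a mapping table, and each pixel's gray value in the
# first preview image is converted through it to form the second preview
# image. The registry and the contrast curve are illustrative.

LUT_REGISTRY = {
    "default_lut": list(range(256)),  # identity: preview unchanged
    "high_contrast_lut": [min(255, max(0, (v - 128) * 2 + 128)) for v in range(256)],
}

def convert_preview(gray_pixels, lut_id):
    """Convert pixel gray values according to the LUT's identification information."""
    table = LUT_REGISTRY[lut_id]
    return [table[g] for g in gray_pixels]
```

Because the table is precomputed once per LUT, converting a frame is a constant-time lookup per pixel, which is what makes LUT-based styling cheap enough for live preview.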
In a second aspect, an embodiment of the present application provides a shooting method. The method can be applied to an electronic device comprising a camera and a display screen. In the method, the electronic device captures a first image through the viewfinder in response to a user operation of starting the camera and automatically displays, on a shooting preview interface, a third preview image obtained by processing the first image; the electronic device then captures a second image through the viewfinder and automatically displays, on the shooting preview interface, a fourth preview image obtained by processing the second image, wherein the preview effect of the third preview image is different from the preview effect of the fourth preview image.
By adopting this scheme, the electronic device processes different captured images to obtain different corresponding preview images, so that when the viewfinder image changes, a matching preview image is produced automatically, enriching the preview effects and the effect of recorded video images and meeting users' current shooting requirements.
In one possible design manner of the second aspect, the electronic device is preset with a plurality of first color lookup tables (LUTs). Automatically displaying the third preview image obtained by processing the first image on the shooting preview interface specifically comprises: displaying, on the shooting preview interface, a third preview image of the first image processed by a third target LUT among the first LUTs. Automatically displaying the fourth preview image obtained by processing the second image on the shooting preview interface specifically comprises: displaying, on the shooting preview interface, a fourth preview image of the second image processed by a fourth target LUT among the first LUTs; the third target LUT is different from the fourth target LUT.
In this design, the electronic device may process the first image according to a third target LUT of the first LUTs; and processing the second image according to a fourth target LUT in the first LUT, so that preview images corresponding to different LUTs can be obtained, and the preview effect of the preview images is enriched.
In one possible design manner of the second aspect, the method further includes: the electronic device runs a preset AI model according to a first preset label, a second preset label, and a third preset label to identify image information of the third preview image, and matches a LUT corresponding to the identified image information to determine the third target LUT; likewise, the electronic device runs the preset AI model according to the three preset labels to identify image information of the fourth preview image and matches a LUT corresponding to that image information to determine the fourth target LUT. In both cases, the first preset label represents the tones corresponding to different gray-scale intervals, the second preset label represents the exposure of the respective preview image, the third preset label represents scene information of the respective preview image, and the image information includes the scene and the brightness.
In one possible design manner of the second aspect, the electronic device includes a setting interface, where the setting interface includes a setting item for running the preset AI model; whether to run the preset AI model can be set through the setting interface.
In one possible design manner of the second aspect, when the electronic device runs the preset AI model, the electronic device displays a third interface; the third interface is a dynamic interface for the electronic equipment to recognize the third preview image; the third interface includes a plurality of bubbles; any two bubbles in the plurality of bubbles have different sizes and transparencies; or when the electronic equipment runs the preset AI model, the electronic equipment displays a fourth interface; the fourth interface is a dynamic interface for the electronic equipment to recognize the fourth preview image; the fourth interface includes a plurality of bubbles; any two bubbles of the plurality of bubbles are different in size and transparency.
In a third aspect, the present application provides an electronic device comprising a memory, a display screen, one or more cameras, and one or more processors. The memory, display screen, camera are coupled to the processor. Wherein the camera is used for acquiring images, the display screen is used for displaying the images acquired by the camera or the images generated by the processor, and the memory is stored with computer program codes, and the computer program codes comprise computer instructions which, when executed by the processor, cause the electronic device to execute the method according to the first aspect and any one of the possible design modes thereof; alternatively, the method of the second aspect and any one of its possible designs is performed.
In a fourth aspect, the present application provides an electronic device comprising a memory, a display screen, one or more cameras, and one or more processors. The memory, the display screen and the camera are coupled with the processor. Wherein the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the steps of: the electronic equipment responds to the operation of starting the camera by a user and displays a shooting preview interface; the shooting preview interface comprises a first preview image; the electronic device determines a first target LUT, wherein the first target LUT is one of a plurality of first LUTs; the electronic equipment processes the first preview image according to the first target LUT so as to obtain a second preview image; the preview effect of the first preview image is different from the preview effect of the second preview image.
In one possible design of the fourth aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic equipment responds to the operation of a user on a shooting control, and video images are recorded; the video image is obtained by processing the image acquired by the camera by the electronic equipment according to the first target LUT.
In a possible design manner of the fourth aspect, the shooting preview interface is an interface of the electronic device in a movie mode, the movie mode being a mode in which the electronic device records video according to the first target LUT. The computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: displaying a first interface in response to a user operation of starting the camera, the first interface comprising a first control used to instruct a first application that calls the camera to enter the movie mode; and displaying the shooting preview interface in response to a user operation on the first control; the first preview image is an image obtained by processing the image acquired by the camera according to a default LUT.
In one possible design manner of the fourth aspect, the shooting preview interface includes first indication information, where the first indication information is used to prompt the user to adjust the display screen of the electronic device to the landscape state.
In one possible design of the fourth aspect, the shooting preview interface includes a second control; the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic equipment responds to the operation of a user on the second control, and a plurality of options of the first LUTs are displayed; the electronic device determines a first target LUT in response to a user operation of one of the plurality of first LUT options.
In one possible design of the fourth aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic equipment operates a preset AI model according to the first preset label, the second preset label and the third preset label to identify the image information of the first preview image; the first preset label is used for representing the tone corresponding to different gray scale intervals; the second preset label is used for representing the exposure of the first preview image; the third preset label is used for representing scene information of the first preview image; the image information includes scene and brightness; the electronic device automatically matches the corresponding LUT according to the image information of the identified first preview image to determine a first target LUT.
In a possible design manner of the fourth aspect, the electronic device includes a setting interface, where the setting interface includes a setting item for running the preset AI model; whether to run the preset AI model can be set through the setting interface.
In one possible design of the fourth aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: and the electronic equipment responds to the exit operation of the user on the first target LUT, and the electronic equipment processes the first preview image according to the default LUT.
In one possible design of the fourth aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic equipment matches the corresponding first LUT according to the image information of the identified first preview image; the electronic equipment responds to the operation of a user on the second control, and a plurality of options of the first LUTs are displayed; the electronic device determines a first target LUT in response to a user operation on one of the plurality of first LUTs.
In a possible design manner of the fourth aspect, the shooting preview interface further includes a third control; the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic equipment responds to the operation of the user on the third control, and a plurality of options of the second LUT are displayed; wherein; the number of gray levels of the second LUT is greater than the number of gray levels of the first LUT.
In one possible design manner of the fourth aspect, when the electronic device runs the preset AI model, the electronic device displays the second interface; the second interface is a dynamic interface for the electronic equipment to recognize the first preview image; the second interface includes a plurality of bubbles; any two bubbles of the plurality of bubbles are different in size and transparency.
In one possible design of the fourth aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: when the electronic equipment starts to record the video image, the electronic equipment stops running the preset AI model.
In one possible design of the fourth aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: when the first preview image changes, displaying second indication information, where the second indication information is used to prompt the user whether to switch the target LUT, and switching to a second target LUT in response to a confirmation operation of the user on the second indication information; or, the second indication information includes a LUT icon, and the electronic device switches to the second target LUT in response to a user operation on the LUT icon, the LUT icon showing a LUT different from the first target LUT.
In one possible design of the fourth aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic equipment acquires the identification information of the first target LUT; the electronic equipment acquires pixel information of a first preview image; the pixel information comprises gray values of each pixel in the first preview image; and the electronic equipment converts the pixel information according to the identification information to obtain a second preview image.
In a fifth aspect, the present application provides an electronic device comprising a memory, a display screen, one or more cameras, and one or more processors. The memory, the display screen, and the camera are coupled with the processor. The memory stores computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the steps of: capturing a first image through the viewfinder in response to a user operation of starting the camera, and automatically displaying, on a shooting preview interface, a third preview image obtained by processing the first image; and capturing a second image through the viewfinder, and automatically displaying, on the shooting preview interface, a fourth preview image obtained by processing the second image, wherein the preview effect of the third preview image is different from the preview effect of the fourth preview image.
In one possible design manner of the fifth aspect, the electronic device is preset with a plurality of first color lookup tables LUTs; the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: displaying a third preview image of the first image processed by a third target LUT in the first LUT on a shooting preview interface; displaying a fourth preview image of the second image processed by a fourth target LUT in the first LUT on a shooting preview interface; the third target LUT is different from the fourth target LUT.
In one possible design of the fifth aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: running a preset AI model according to a first preset label, a second preset label, and a third preset label to identify image information of the third preview image, and matching a LUT corresponding to the identified image information to determine the third target LUT; and running the preset AI model according to the three preset labels to identify image information of the fourth preview image, and matching a LUT corresponding to that image information to determine the fourth target LUT. In both cases, the first preset label represents the tones corresponding to different gray-scale intervals, the second preset label represents the exposure of the respective preview image, the third preset label represents scene information of the respective preview image, and the image information includes the scene and the brightness.
In a possible design manner of the fifth aspect, the electronic device includes a setting interface, and the setting interface includes a setting item for running the preset AI model; whether to run the preset AI model can be set through the setting interface.
In one possible design manner of the fifth aspect, when the electronic device runs the preset AI model, the electronic device displays a third interface; the third interface is a dynamic interface for the electronic equipment to recognize the third preview image; the third interface includes a plurality of bubbles; any two bubbles in the plurality of bubbles have different sizes and transparencies; or when the electronic equipment runs the preset AI model, the electronic equipment displays a fourth interface; the fourth interface is a dynamic interface for the electronic equipment to recognize the fourth preview image; the fourth interface includes a plurality of bubbles; any two bubbles of the plurality of bubbles are different in size and transparency.
In a sixth aspect, the present application provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any one of its possible designs; or performing the method as described in the second aspect and any one of its possible designs.
In a seventh aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method according to the first aspect and any one of the possible designs; or performing the method as described in the second aspect and any one of its possible designs. The computer may be the electronic device described above.
It may be appreciated that, for the advantageous effects achievable by the electronic device according to the third aspect and the fourth aspect, the electronic device according to the fifth aspect and any possible design manner thereof, the computer storage medium according to the sixth aspect, and the computer program product according to the seventh aspect, reference may be made to the advantageous effects of the first aspect and any possible design manner thereof, which are not repeated here.
Drawings
Fig. 1 is a schematic diagram of shooting effects or styles corresponding to a plurality of LUTs provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of a preview interface of a recorded video of a mobile phone according to an embodiment of the present application;
Fig. 3 is a schematic diagram of setting video HDR10 in a video mode of a mobile phone according to an embodiment of the present application;
Fig. 4 is a schematic diagram of setting a professional video HDR10 in a professional mode of a mobile phone according to an embodiment of the present application;
Fig. 5 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
Fig. 6 is a flowchart of a photographing method according to an embodiment of the present application;
Fig. 7 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 8 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 9 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 10 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 11 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 12 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 13 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 14 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 15 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 16 is a schematic software structure of an electronic device according to an embodiment of the present application;
Fig. 17 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 18 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 19 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 20 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 21 is a flowchart of still another photographing method according to an embodiment of the present application;
Fig. 22 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
For ease of understanding, the terms referred to in the embodiments of the present application are presented herein:
1) User experience (UX): may also be referred to as UX feature, which refers to the perception of a user using an electronic device during a shooting process.
2) Film mode: refers to a mode in which the electronic device records video. In the embodiment of the application, the movie mode includes a 4K high-dynamic range (HDR) function and a look-up table (LUT) function, and when a user selects the movie mode to record video, the recorded video can have the texture of a movie, so that the picture is more stereoscopic.
3) 4K HDR: the HDR technology is added on the basis of 4K resolution, so that photographed pictures can show more real effects and are more similar to real pictures visible to human eyes. In some embodiments, the bright portions of the picture taken with the 4K HDR functionality are not overexposed and the dark portion details are clearly visible.
4) LUT: which may also be referred to as LUT files or LUT parameters, is a color conversion template, such as a mapping table of Red Green Blue (RGB). The LUT can change the gray value of the pixel actually sampled into another gray value corresponding to the gray value through certain conversion (such as threshold value, inversion, contrast adjustment, linear conversion and the like), so that the LUT can play a role in highlighting the useful information of the image and enhancing the optical contrast of the image.
An image includes a number of pixels, each represented by an RGB value. The display screen of the electronic device may display the image according to the RGB value of each pixel in the image. That is, these RGB values indicate how the display should be lit so that a wide variety of colors can be blended for presentation to the user.
The LUT is an RGB mapping table, which is used to characterize the correspondence between RGB values before and after adjustment. For example, please refer to table 1, which shows an example of a LUT.
TABLE 1
Original RGB values      Output RGB values
(14, 22, 24)             (6, 9, 4)
(61, 34, 67)             (66, 17, 47)
(94, 14, 171)            (117, 82, 187)
(241, 216, 222)          (255, 247, 243)
When the original RGB values are (14, 22, 24), the output RGB values are (6, 9, 4) through the mapping of the LUT shown in Table 1. When the original RGB values are (61, 34, 67), the output RGB values are (66, 17, 47). When the original RGB values are (94, 14, 171), the output RGB values are (117, 82, 187). When the original RGB values are (241, 216, 222), the output RGB values are (255, 247, 243).
When the same image is processed by using different LUTs, different style image effects can be obtained. For example, LUT1, LUT2, and LUT3 shown in fig. 1 are different color lookup tables. The LUT1 is used to process the original image 100 acquired by the camera to obtain the image 101 shown in fig. 1. The LUT2 is used to process the raw image 100 acquired by the camera to obtain the image 102 shown in fig. 1. The LUT3 is used to process the original image 100 acquired by the camera to obtain the image 103 shown in fig. 1. Comparing the image 101, the image 102, and the image 103 shown in fig. 1, it can be seen that the image 101, the image 102, and the image 103 have different image effects or styles.
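The mapping illustrated by Table 1 can be sketched as a simple lookup. The sketch below uses only the four example entries given above; a real LUT defines an output for every possible RGB input, commonly stored as a sampled 3D grid with interpolation between nodes.

```python
# Sparse, illustrative LUT built from the four example entries of Table 1;
# a complete LUT covers the whole RGB cube.
TABLE_1_LUT = {
    (14, 22, 24): (6, 9, 4),
    (61, 34, 67): (66, 17, 47),
    (94, 14, 171): (117, 82, 187),
    (241, 216, 222): (255, 247, 243),
}

def map_pixel(rgb, lut=TABLE_1_LUT):
    """Return the mapped RGB triple for a known entry, else the input unchanged."""
    return lut.get(rgb, rgb)
```

Applying a different table (e.g., the entries of LUT2 or LUT3) to the same pixels would yield a differently styled image, which is the effect compared in fig. 1.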
In the conventional technology, when an electronic device records video, its recording function includes neither a 4K HDR function nor a LUT function; moreover, its shooting modes do not include a movie mode. As a result, the style or effect of video recorded by the electronic device is monotonous, which cannot meet users' current, diverse shooting needs, and the user experience is poor.
The embodiment of the application provides a shooting method which can be applied to electronic equipment comprising a camera and a display screen. The method can enrich shooting effects obtained by video recording, can meet the diversified shooting requirements of current users, and further improves user experience.
The camera of the electronic device includes a first function and a second function. When the user selects the first function, the electronic device can present video effects or styles corresponding to different LUTs during video recording. When the user also selects the second function, the images shot by the electronic device can be sharper and more vivid.
The first function may be, for example, a LUT function; the second function may be, for example, a 4K HDR function.
Taking the electronic device as a mobile phone as an example, in some embodiments, as shown in fig. 2 (1), the function of recording video by the mobile phone may be implemented in a video recording mode of a camera of the mobile phone. In other embodiments, as shown in fig. 2 (2), the function of recording video by the mobile phone may be implemented in a professional mode of the camera of the mobile phone. In still other embodiments, as shown in fig. 2 (3), the function of recording video by the mobile phone may be implemented in a movie mode of the camera of the mobile phone.
As also shown in fig. 2 (1), in some embodiments, the mobile phone displays an interface 201 in the video mode. In other embodiments, in the interface 201, the video mode includes a LUT control for starting the LUT function. It should be noted that the LUT control is not illustrated in fig. 2 (1) in the embodiment of the present application.
In some embodiments, the video mode further includes an HDR setting item for starting the 4K HDR function. For example, as shown in fig. 3 (1), after the mobile phone enters the video mode, the mobile phone displays the interface 201 in the video mode. The interface 201 includes a setting item 206. In response to the user's operation on the setting item 206, the mobile phone displays a setting interface 207 as shown in fig. 3 (2); the setting interface 207 includes a "photo scale" setting item, a "sound control photographing" setting item, a "smiling face snapshot" setting item, a "video resolution" setting item, a "video frame rate" setting item, a "video HDR10" setting item 208, a "high-efficiency video format" setting item, an "AI movie hue" setting item, and the like. In response to the user's start operation on the "video HDR10" setting item 208, the mobile phone displays a setting interface 209 as shown in fig. 3 (3). In the setting interface 209, the "video HDR10" setting item is turned on; that is, the mobile phone has enabled the 4K HDR function in the video mode.
As also shown in fig. 2 (2), the mobile phone displays the interface 202 in the professional mode. The interface 202 includes a LUT control 203 for starting the LUT function. After the user turns on the LUT control 203, the mobile phone starts the LUT function. In some embodiments, the interface 202 also includes a LOG control a for starting the LOG function. Since video images shot by the mobile phone camera with the LOG function are gray in color, while video images shot with the LUT function are rich in color, the LOG control a and the LUT control 203 included in the interface 202 cannot be turned on at the same time. That is, in the professional mode of the mobile phone camera, the LOG function and the LUT function cannot run simultaneously. Note that the LUT control 203 shown in fig. 2 (2) is in the off state.
In some embodiments, the professional mode further includes an HDR setting item for starting the 4K HDR function. For example, as shown in fig. 4 (1), after the mobile phone enters the professional mode, the mobile phone displays the interface 202 in the professional mode. The interface 202 includes a setting item 206. In response to the user's operation on the setting item 206, the mobile phone displays a setting interface 210 as shown in fig. 4 (2); the setting interface 210 includes a "photo scale" setting item, a "sound control photographing" setting item, a "smiling face snapshot" setting item, a "video resolution" setting item, a "video frame rate" setting item, a "professional video HDR10" setting item 211, a "high-efficiency video format" setting item, an "AI movie hue" setting item, and the like. In response to the user's start operation on the "professional video HDR10" setting item 211, the mobile phone displays a setting interface 212 as shown in fig. 4 (3). In the setting interface 212, the "professional video HDR10" setting item is turned on; that is, the mobile phone has enabled the 4K HDR function in the professional mode.
As also shown in fig. 2 (3), the mobile phone displays the interface 204 in the movie mode. The interface 204 includes a LUT control 203 for starting the LUT function and a 4K HDR control 205 for starting the 4K HDR function.
For example, the photographing method provided in the embodiments of the present application may be applied to a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted device, a smart automobile, a smart speaker, and other electronic devices, which is not limited in the embodiments of the present application.
Fig. 5 is a schematic structural diagram of the electronic device 100. Wherein the electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device. In other embodiments, the electronic device may also use different interfacing manners in the foregoing embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of electronic devices can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. Microphone 170C, also referred to as a "microphone" or "microphone", is used to convert sound signals into electrical signals.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, audio, video, etc. files are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 implements the various functional applications and data processing of the electronic device by executing the instructions stored in the internal memory 121. For example, in an embodiment of the present application, the internal memory 121 may include a storage program area and a storage data area.
The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195 to enable contact and separation with the electronic device. The electronic device may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, micro SIM cards, and the like.
The methods in the following embodiments may be implemented in the electronic device 100 having the above-described hardware structure. In the following embodiments, the electronic device 100 is taken as an example of a mobile phone, and the technical solutions provided in the embodiments of the present application are specifically described.
The following embodiments take as an example that a new movie mode is added to the mobile phone camera. It should be understood that the movie mode is one mode in which the mobile phone records video.
In one possible implementation manner, the user can select the corresponding LUT according to different shooting scenes, so that the styles or effects of the images shot by the different shooting scenes are different, thereby enriching the styles or effects of the shooting of the mobile phone, and enabling the shooting effects to be more diversified and personalized. Wherein, shooting scene can be: a person scene, a travel scene, a food scene, a landscape scene, a person scene, a pet scene, a still scene, or the like. Of course, the shooting scene may be other scenes, which are not listed here.
In some embodiments, as shown in fig. 6 (1), in response to the user's operation on an icon 301 of the "camera" application in the mobile phone home screen interface, the mobile phone displays an interface 302 as shown in fig. 6 (2). The interface 302 is a preview interface for mobile phone photographing, and the interface 302 further includes a "portrait" mode, a "video" mode, a "movie" mode, and a "professional" mode. In response to the user selecting the "movie" mode 303, the mobile phone displays an interface 304 as shown in fig. 7 (1). The interface 304 is a preview interface before video recording of the mobile phone. In the interface 304, the mobile phone displays prompt information 305, which is used to prompt the user to place the mobile phone in a landscape state. For example, the prompt information 305 may be "shooting in landscape orientation gives a better effect in movie mode". Then, when the user places the mobile phone in the landscape state, the mobile phone displays an interface 306 as shown in fig. 7 (2). The interface 306 is a preview interface before video recording in the landscape state of the mobile phone.
As also shown in the interface 306 in fig. 7 (2), the interface 306 includes the 4K HDR control 205 and the LUT control 203. In some embodiments, as shown in fig. 8 (1), in response to the user's operation on the LUT control 203, the mobile phone displays an interface 307 as shown in fig. 8 (2). The interface 307 includes LUT templates 308; the LUT templates 308 include LUT1, LUT2, LUT3, and so on up to LUT8.
Note that, in the case where the mobile phone camera enters the movie mode, the LUT control 203 is always in an on state. In other words, after the mobile phone camera enters the movie mode, the LUT function is turned on by default, and the default LUT is selected to process the preview image. In some embodiments, the default LUT is typically LUT1.
In the present embodiment, the color depth (which may also be referred to as hue) of different LUTs is different. Illustratively, from LUT1 to LUT8, the tone of the LUT is gradually changed from a warm tone to a cool tone in sequence; alternatively, from LUT1 to LUT8, the tone of the LUT gradually changes from the cool tone to the warm tone in order. The embodiments of the present application are not limited in this regard.
In some embodiments, LUT1 may be named "city of things," LUT2 may be named "early spring of cherry," LUT3 may be named "nine zero years," and LUT4 may be named "twilight city. Of course, other LUTs included in the LUT template 308 may also be named and are not listed here. In addition, the above-described naming of the LUT is merely an example of the embodiment of the present application, and is not meant to limit the present application.
In some embodiments, the preview interface includes a preview image captured by a camera of the mobile phone. The preview image may be a person image, a travel image, a landscape image, a pet image, or the like. The user may select different LUTs for different preview images. For example, when the preview image is a person scene, the user may select LUT1 ("city of things") to process the preview image and obtain the corresponding shooting effect or style. When the preview image is a travel scene, the user may select LUT2 ("early spring of cherry") to process the preview image and obtain the corresponding shooting effect or style. It can be understood that, for different preview images, the corresponding LUT can be used to process the preview image to obtain a corresponding shooting effect or style, which enriches the styles and effects of images shot by the mobile phone and meets users' diversified shooting requirements.
Taking LUT1 as an example, when the user processes the preview image with LUT1, the mobile phone may transform the RGB value (i.e., the gray value) of each of the pixels included in the unprocessed preview image to a certain extent. As a result, the RGB value of each pixel in the processed preview image differs from that of the corresponding pixel in the unprocessed preview image; that is, the brightness of each pixel in the processed preview image differs from that in the unprocessed preview image. This highlights the useful information in the image and enhances its optical contrast, so that the color of the processed preview image is deeper and the figure outline is clearer.
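The per-pixel RGB remapping described above can be sketched with a simple 1D per-channel lookup table. This is a minimal illustration only; the function name and the sample contrast curve are assumptions, not the patent's actual LUT data:

```python
def apply_lut(pixels, lut):
    """Remap each RGB channel of each pixel through a 256-entry lookup table."""
    return [(lut[r], lut[g], lut[b]) for (r, g, b) in pixels]

# Hypothetical "deeper color" curve: a gamma-like mapping that darkens
# midtones while preserving black (0) and white (255).
contrast_lut = [min(255, int((v / 255) ** 1.5 * 255)) for v in range(256)]

image = [(10, 128, 250), (0, 64, 255)]       # two sample RGB pixels
processed = apply_lut(image, contrast_lut)   # [(1, 90, 247), (0, 32, 255)]
```

A real color LUT of this kind is typically a 3D table indexing all three channels jointly; a 1D per-channel curve is used here only to keep the sketch short.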
In some embodiments, in response to a user operation of 4K HDR control 205, based on interface 307 shown in fig. 8 (2), the handset displays interface 309 shown in fig. 8 (3). Wherein the interface 309 includes LUT templates 310; the LUT templates 310 include LUT9, LUT10, LUT11, and LUT16.
Note that the 4K HDR control 205 shown in (2) in fig. 8 and the 4K HDR control 205 shown in (3) in fig. 8 are in different states. For example, 4K HDR control 205 shown in (2) in fig. 8 is in an off state, and 4K HDR control 205 shown in (3) in fig. 8 is in an on state.
In some embodiments, the LUT templates 308 are 8bit templates when the 4K HDR control 205 is in an off state, and the LUT templates 310 are 10bit templates when the 4K HDR control 205 is in an on state. 8 bits can represent 256 gray levels and 10 bits can represent 1024 gray levels; the more gray levels there are, the finer the color and the more uniform and natural the color transitions. Thus, when the user selects the LUT template 310, i.e., the user turns on the 4K HDR control 205 and selects the 4K HDR function, the image captured by the mobile phone is clearer and its colors are more vivid.
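The gray-level counts quoted above follow directly from the bit depth (2^n levels per channel); a one-line illustration, with a hypothetical helper name:

```python
def gray_levels(bit_depth):
    """Number of distinct gray levels an n-bit channel can represent."""
    return 2 ** bit_depth

levels_8bit = gray_levels(8)    # 256 gray levels
levels_10bit = gray_levels(10)  # 1024 gray levels
```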
Illustratively, when the user turns on the 4K HDR control 205, the LUT templates 310 are 10bit templates, i.e., the LUTs included in the LUT templates 310 are all 10bit, so that each LUT in the LUT templates 310 can represent 1024 gray levels. In this way, when a LUT in the LUT template 310 is used to process the image, the mobile phone may transform the RGB value of each of the pixels included in the unprocessed preview image to a certain extent, so that the RGB value of each pixel in the processed image differs from that in the unprocessed image. Moreover, since the LUT template 310 is a 10bit template, after conversion by a LUT in the LUT template 310, the color of the processed image is richer and the color transitions are more uniform and natural, and the preview image processed by the mobile phone with a 10bit LUT is clearer.
The LUT1, LUT2, LUT3, and LUT8 included in the LUT template 308 correspond one to one to the LUT9, LUT10, LUT11, and LUT16 included in the LUT template 310. For example, LUT1 corresponds to LUT9, LUT2 corresponds to LUT10, LUT3 corresponds to LUT11, LUT8 corresponds to LUT16, and so on. In other words, when LUT1 is "city of things", LUT9 is also "city of things"; however, since LUT9 is 10bit and LUT1 is 8bit, LUT9 has finer color and more uniform and natural color transitions than LUT1.
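Given the one-to-one pairing of 8bit LUTs with their 10bit counterparts (LUT1 with LUT9, up to LUT8 with LUT16), the pairing amounts to an offset of 8 in the numbering; a sketch, where the helper name is an assumption:

```python
def hdr_counterpart(lut_name):
    """Map an 8bit LUT name to its 10bit counterpart: LUT1 -> LUT9, ..., LUT8 -> LUT16."""
    number = int(lut_name.removeprefix("LUT"))
    return f"LUT{number + 8}"
```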
In other embodiments, the mobile phone further includes an HDR setting item for enabling the 4K HDR function. For example, as shown in fig. 9 (1), the mobile phone displays an interface 306 in the movie mode, and the interface 306 is a preview interface before the mobile phone starts video recording in the movie mode. The interface 306 includes a setting item 401, and in response to the user's operation of the setting item 401, the mobile phone displays a setting interface 402 as shown in fig. 9 (2). The setting interface 402 includes a "photo scale" setting item, a "sound control photographing" setting item, a "smiling face snapshot" setting item, a "video resolution" setting item, a "video frame rate" setting item, a "movie HDR10" setting item, a "high-efficiency video format" setting item, an "AI movie hue" setting item 403, and the like. In response to the user's start operation on the "movie HDR10" setting item, the "movie HDR10" setting item is turned on, i.e., the mobile phone starts the 4K HDR function.
Considering that, when the shooting scene is complex, the user often does not know which LUT gives the better effect, in another possible implementation the user can enable an artificial intelligence (AI) model. The AI model identifies the preview image included in the preview interface before the mobile phone starts recording and matches a LUT corresponding to the preview image, so that the mobile phone can automatically select the LUT according to the identified preview image. This enriches the styles or effects of mobile phone shooting without requiring the user to choose, thereby further improving the user experience. The AI model may be any machine learning model capable of identifying the preview image. For example, the AI model may be any of the following neural network models: VGG-Net, ResNet, or LeNet.
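Functionally, the AI-assisted selection reduces to scene recognition followed by a scene-to-LUT lookup. The sketch below stands in for the neural network with a plain table; the scene labels and the mapping are illustrative assumptions, not the patent's trained model:

```python
# Hypothetical scene-to-LUT table; in the patent this mapping is produced by
# a neural network (e.g. a VGG-Net-, ResNet-, or LeNet-style classifier).
SCENE_TO_LUT = {
    "person": "LUT1",
    "travel": "LUT2",
    "landscape": "LUT5",
}
DEFAULT_LUT = "LUT1"  # the text's default when nothing specific matches

def recommend_lut(scene):
    """Return the LUT matched to the recognized scene, falling back to the default."""
    return SCENE_TO_LUT.get(scene, DEFAULT_LUT)
```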
In some embodiments, as shown in fig. 6 (1), in response to a user operating an icon 301 of a "camera" application in the mobile phone home screen interface, the mobile phone displays an interface 302 as shown in fig. 6 (2). The interface 302 is a preview interface of mobile phone photographing, and the interface 302 includes a "portrait" mode, a "video" mode, a "movie" mode, and a "professional" mode. In response to the user selecting the "movie" mode 303, the mobile phone displays an interface 306 as shown in fig. 9 (1). The interface 306 is a preview interface before video recording of the mobile phone.
As also shown by the interface 306 in fig. 9 (1), the interface 306 includes the 4K HDR control 205, the LUT control 203, and the setting item 401. In some embodiments, as shown in fig. 9 (1), in response to the user's operation of the setting item 401, the mobile phone displays a setting interface 402 as shown in fig. 9 (2). The setting interface 402 includes a "photo scale" setting item, a "sound control photographing" setting item, a "smiling face snapshot" setting item, a "video resolution" setting item, a "video frame rate" setting item, a "movie HDR10" setting item, a "high-efficiency video format" setting item, an "AI movie hue" setting item 403, and the like. In response to the user's activation operation on the "AI movie hue" setting item 403, the mobile phone displays a setting interface 404 as shown in fig. 9 (3); in the setting interface 404, the "AI movie hue" setting item is turned on, i.e., the mobile phone activates the AI model to recognize the preview image.
In some embodiments, as shown in fig. 10 (1), after the "AI movie hue" setting item is turned on in the interface 404, the mobile phone displays an interface 405 as shown in fig. 10 (2) in response to the user's return operation on the interface 404 (e.g., the user clicking the return arrow of the setting item in the interface 404). The interface 405 is a dynamic interface in which the AI model identifies the preview image, and the interface 405 includes indication information characterizing the identification process. The indication information may be, for example, an icon, such as the circles of different sizes and different transparency shown in the interface 405 of fig. 10 (2).
Then, after the recognition process finishes, the mobile phone displays an interface 406 as shown in fig. 10 (3); the interface 406 displays the LUT template matched with the preview image after the mobile phone identifies the preview image. Illustratively, the interface 406 includes a LUT template 407. From left to right, the LUT template 407 includes a LUT icon, a LUT, and an "x" flag. Note that the LUT icon is the icon of the LUT control 203. The LUT included in the LUT template 407 may be, for example, LUT8; specifically, it is the LUT matched with the preview image after the AI model identifies the preview image, which is not limited in the embodiments of the present application.
By way of example, LUT8 may be named "morning light," or other suitable name, to which embodiments of the present application are not limited.
In some embodiments, as shown in fig. 11 (1), if the user does not want the LUT matched by the AI model, the mobile phone may, in response to the user's operation of the "x" flag, display an interface 408 as shown in fig. 11 (2). The interface 408 is displayed after the mobile phone exits the LUT template 407. Meanwhile, the mobile phone uses the default LUT to process the preview image of the preview interface. In general, the default LUT is LUT1.
It should be noted that, before the mobile phone starts the AI model, once the mobile phone enters the movie mode it uses the default LUT to process the preview image of the preview interface.
It will be appreciated that the LUT template 407 shown in fig. 10 (3) and the LUT template 407 shown in fig. 11 (1) are the LUT templates matched with the preview image after the AI model identifies the preview image. In some embodiments, the LUT template 407 includes only one LUT (e.g., LUT8). In other embodiments, if the scene of the preview image is more complex (e.g., the scene of the preview image includes a person scene, a pet scene, a landscape scene, etc.), the LUT template 407 may include two or more LUTs (e.g., LUT1, LUT2, and LUT8). Fig. 10 (3) and fig. 11 (1) take the case where the LUT template 407 includes only one LUT as an example.
It should be appreciated that, when the user starts the AI model to recognize the preview image, if the 4K HDR control 205 is in an off state, the LUT template 407 matched by the AI model for the preview image includes one or more of LUT1, LUT2, LUT3, and LUT8. If the 4K HDR control 205 is in an on state, the LUT template 407 matched by the AI model for the preview image includes one or more of LUT9, LUT10, LUT11, and LUT16.
In some embodiments, after the AI model matches the LUT template corresponding to the preview image, the AI model may automatically select a LUT included in the LUT template to process the preview image. For example, when the LUT template 407 matched by the AI model for the preview image includes LUT8, the AI model may automatically select LUT8 to process the preview image. For another example, when the LUT template 407 matched by the AI model includes LUT1, LUT2, and LUT8, the AI model may automatically select LUT1, the first in the arrangement order of the LUTs, to process the preview image.
In other embodiments, after the AI model matches the LUT template corresponding to the preview image, the user selects a LUT included in the LUT template to process the preview image. For example, when the LUT template 407 matched for the preview image includes LUT8, the user selects LUT8 to process the preview image. For another example, when the LUT template 407 matched for the preview image includes LUT1, LUT2, and LUT8, the user can select a corresponding LUT (e.g., LUT8) to process the preview image according to the scene and brightness of the preview image or the user's own preference.
When the user selects the LUT himself, in some embodiments, as shown in fig. 12 (1), in response to the user's operation of the LUT control 203, the mobile phone displays an interface 409 as shown in fig. 12 (2), and the interface 409 includes a LUT template 410. The LUT template 410 includes all the LUTs. For example, the LUT template 410 includes LUT1, LUT2, LUT3, and LUT8; alternatively, the LUT template 410 includes LUT9, LUT10, LUT11, and LUT16. The user may then select a LUT included in the LUT template 410 to process the preview image, for example according to the scene and brightness of the preview image or the user's own preference.
Note that when 4K HDR control 205 is in an unopened state, LUT template 410 includes LUT1, LUT2, LUT3, LUT8. When 4K HDR control 205 is in an on state, LUT template 410 includes LUT9, LUT10, LUT11, LUT16. Fig. 12 (2) illustrates an example in which the 4K HDR control 205 is in an unopened state, and the LUT template 410 includes LUT1, LUT2, LUT3, and LUT8.
In the case where the mobile phone camera starts the AI model, in some embodiments, the AI model may periodically identify the preview image and match the LUT corresponding to it. For example, the AI model may identify the preview image once every preset time period; the preset duration may be, for example, 3 seconds, 5 seconds, or 10 seconds. Of course, the preset duration may be any other suitable duration, which is not limited in the embodiments of the present application. Consider that, if the scene of the video images collected by the camera switches frequently, the AI model would keep identifying the video images and matching corresponding LUTs, which would affect the video shooting effect. Based on this, in some embodiments, the AI model periodically recognizes the preview image only before the mobile phone starts recording video; once the mobile phone starts recording, the AI model stops identifying. This avoids frequent LUT recommendations affecting the shooting effect during video recording.
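The periodic-recognition-with-pause behavior (identify once per preset duration, suspend while recording) can be sketched as a small scheduler. The class and method names, and the 3-second default, are assumptions for illustration:

```python
class PeriodicRecognizer:
    """Run scene identification at a fixed interval, paused during recording."""

    def __init__(self, interval_s=3.0):
        self.interval_s = interval_s    # the preset duration (e.g. 3, 5, or 10 s)
        self.recording = False
        self.last_run = None
        self.identifications = 0

    def tick(self, now_s):
        """One scheduler step; returns True if an identification ran."""
        if self.recording:
            return False  # recognition is suspended while recording
        if self.last_run is None or now_s - self.last_run >= self.interval_s:
            self.last_run = now_s
            self.identifications += 1  # stand-in for running the AI model
            return True
        return False
```

With a 3-second interval, `tick(0.0)` identifies immediately, ticks within the next 3 seconds do nothing, and setting `recording = True` stops all identification until recording ends.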
In some embodiments, when the preview image changes, the AI model automatically switches to the LUT corresponding to the changed preview image. For example, when the preview image changes from a person scene to a landscape scene, the AI model may switch the LUT from LUT1 to LUT5.
In other embodiments, when the preview image changes, the preview interface of the mobile phone further includes indication information. The indication information is used to prompt the user whether to switch the LUT. For example, after the AI model recognizes that the preview image has changed (e.g., from a person scene to a landscape scene), the mobile phone displays indication information in its preview interface. The indication information may be, for example, an icon or text.
For example, before the preview image changes, the mobile phone displays an interface 501 as shown in fig. 13 (1). The scene of the preview image in the interface 501 is a person scene, and the LUT matched by the AI model for the preview image is LUT2. Accordingly, after the preview image changes, the mobile phone displays an interface 502 as shown in fig. 13 (2). The scene of the preview image in the interface 502 is a landscape scene, and the indication information displayed in the interface 502 is a LUT icon (e.g., LUT5) 503. In some embodiments, the LUT icon 503 may be displayed in the interface 502 for a period of time (e.g., 10 seconds); in response to the user selecting the LUT icon 503, the mobile phone switches LUT2 to LUT5. If the user does not operate the LUT icon 503 within 10 seconds, this indicates that the user refuses to switch, and the mobile phone may continue to process the preview image with LUT2. In other embodiments, the LUT icon 503 may be displayed in the interface 502 persistently and dynamically; the mobile phone switches LUT2 to LUT5 in response to the user's selection of the LUT icon 503. In response to the user's operation of the blank portion of the interface 502, the mobile phone stops displaying the LUT icon 503, indicating that the user refuses to switch, and the mobile phone may continue to process the preview image with LUT2. The blank portion of the interface 502 refers to the portion of the interface 502 other than the LUT icon 503.
For another example, before the preview image changes, the mobile phone displays an interface 504 as shown in fig. 14 (1). The scene of the preview image in the interface 504 is a person scene, and the LUT matched by the AI model for the preview image is LUT2. Accordingly, after the preview image changes, the mobile phone displays an interface 505 as shown in fig. 14 (2). The scene of the preview image in the interface 505 is a landscape scene, and the indication information displayed in the interface 505 is text information. For example, the text information is "LUT5 is better suited to the current shooting scene. Switch to LUT5?". In some embodiments, if the user selects the "OK" control, the mobile phone switches LUT2 to LUT5 and uses LUT5 to process the preview image. If the user selects the "cancel" control, the mobile phone continues to process the preview image with LUT2.
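Both prompt variants above (the timed icon and the confirm/cancel text) reduce to the same decision: switch only on an explicit, timely acceptance. A sketch, with illustrative names and the 10-second window taken from the icon example:

```python
def resolve_lut_prompt(current_lut, suggested_lut, user_choice,
                       elapsed_s=0.0, timeout_s=10.0):
    """Decide which LUT to use after a switch prompt.

    user_choice: "accept" (tapped the LUT icon or "OK"), "decline" (tapped a
    blank area or "cancel"), or None (no response before the prompt expired).
    """
    if user_choice == "accept" and elapsed_s <= timeout_s:
        return suggested_lut
    return current_lut  # decline, timeout, or no response: keep the current LUT
```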
In this embodiment, after the AI model recognizes that the preview image has changed, the mobile phone may display indication information prompting the user whether to switch the LUT, improving the user experience while enriching shooting styles and effects.
In some embodiments, the mobile phone displays an interface 601 shown in fig. 15 (1). The interface 601 is a preview interface of the mobile phone camera before recording. In response to the user's operation of the virtual shutter key 602, the mobile phone displays an interface 603 as shown in fig. 15 (2). The interface 603 is the interface displayed when the mobile phone camera starts recording. The virtual shutter key 602 is a key for starting and ending video recording.
It should be noted that the foregoing embodiments mainly take the newly added movie mode of the mobile phone camera as an example. Other modes may also be newly added to the mobile phone camera, and those modes may likewise adopt the first function and the second function to achieve the technical effects of the embodiments of the present application, which are not repeated here.
In some embodiments, the software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, or a cloud architecture. In this embodiment, taking a layered architecture Android system as an example, a software structure of the electronic device 100 is illustrated.
Fig. 16 is a software structure diagram of an electronic device according to an embodiment of the present application.
It will be appreciated that the layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system may include an application layer (APP), a framework layer (FWK), a hardware abstraction layer (HAL), and a kernel layer (kernel). In some embodiments, the mobile phone also includes hardware (e.g., a display screen).
Illustratively, the application layer may include a user interface (UI) layer and a logic layer. As shown in fig. 16, the UI layer includes the camera, gallery, and other applications. The camera includes a LUT control (e.g., the LUT control 203 in the above embodiments), a 4K HDR control (e.g., the 4K HDR control 205 in the above embodiments), and an AI setting item (e.g., the "AI movie hue" setting item in the above embodiments). The logic layer includes a LUT template module, an encoding module, a LUT control module, an AI recommendation module, an HDR module, a configuration library, and the like.
The hardware abstraction layer is an interface layer between the kernel layer and the hardware, and can be used for abstracting the hardware. Illustratively, as shown in FIG. 16, the hardware abstraction layer includes a camera interface.
The kernel layer provides the bottom layer drive for various hardware of the mobile phone. Illustratively, as shown in FIG. 16, the kernel layer includes a camera driver module.
The framework layer provides application programming interfaces (APIs) and programming services for the application programs of the application layer. The framework layer includes some predefined functions and provides programming services for application-layer calls through the API interfaces. It should be noted that, in the embodiments of the present application, the programming service may be, for example, a camera service. In some embodiments, as shown in fig. 16, the framework layer includes a camera service framework and a media framework. The media framework includes an encoder.
In a possible implementation, when the user selects a LUT according to the shooting scene, the LUT template module is used to receive the start instruction of the LUT control and invoke the LUT template, as shown in fig. 16. The HDR module is used to receive the start instruction of the 4K HDR control and start the 4K HDR function. Illustratively, when 4K HDR is in an off state, the LUT template invoked by the LUT template module includes LUT1, LUT2, LUT3, and LUT8; when 4K HDR is in an on state, the LUT template invoked by the LUT template module includes LUT9, LUT10, LUT11, and LUT16. The LUT control module is used to receive an instruction determining the target LUT and to call the camera interface of the hardware abstraction layer to send a first instruction to the camera driving module; the first instruction includes an identification of the target LUT and instructs acquisition of a preview image of the camera preview interface. The camera driving module acquires the preview image of the camera preview interface according to the first instruction and calls the camera interface of the hardware abstraction layer to send the preview image to the display screen, so that the display screen displays the preview image. The preview image is an image with the LUT effect.
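The "first instruction" that the LUT control module sends down through the HAL camera interface carries the target LUT's identification; it could be modeled as a small message. This is a structural sketch with assumed field names, not the patent's actual interface:

```python
from dataclasses import dataclass

@dataclass
class FirstInstruction:
    """Instruction sent to the camera driving module via the HAL camera interface."""
    target_lut_id: int               # identification of the target LUT (e.g. 2 for LUT2)
    action: str = "acquire_preview"  # instructs acquiring the preview image

def build_first_instruction(target_lut):
    """Derive the numeric identification from a LUT name such as 'LUT2'."""
    return FirstInstruction(target_lut_id=int(target_lut.removeprefix("LUT")))
```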
The LUT template is a template pre-stored in the electronic device; after the user clicks the LUT control, the LUT template module invokes the pre-stored template and displays it on the display screen (for example, in the preview interface of the camera). In some embodiments, the logic layer of the mobile phone application layer further includes a configuration library in which the LUT templates are pre-stored.
Note that, when the target LUT selected by the user is LUT2, the identification of the target LUT included in the first instruction is 2.
When the user starts recording video, the camera driving module calls the camera interface of the hardware abstraction layer to report the video frames of the recorded video image to the encoding module, so that the encoding module encodes the video frames. When the user finishes recording, the encoding module stores the encoded video image. The video image is an image with the LUT effect, so the video frames are video frames with the LUT effect. For example, the video frames may include frames 1 to N, where frame 1 is the first frame at the beginning of recording and frame N is the last frame at the end of recording. In some embodiments, when the user starts recording, the camera driving module calls the camera interface to report frames 1 to N to the encoding module in sequence, and the encoding module encodes them in sequence. Then, when the user finishes recording, the encoding module stores the encoded frames 1 to N, thereby obtaining a video image with the LUT effect.
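The frame-by-frame reporting and in-order encoding of frames 1 to N might look like the loop below; tagging each frame is a stand-in for the real media-framework encoder, and the names are illustrative:

```python
def encode_recording(frames):
    """Encode reported video frames in capture order (frame 1 .. frame N)."""
    encoded = []
    for index, frame in enumerate(frames, start=1):
        encoded.append(("encoded", index, frame))  # stand-in for real encoding
    return encoded  # stored when the user ends recording
```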
In some embodiments, the shooting method provided in the embodiments of the present application in the case where the user turns on the 4K HDR function is described in detail with reference to the schematic structural diagram shown in fig. 19. Illustratively, the encoding module receives the start instruction of the 4K HDR control and sets the parameter of the encoder in the media framework to 10bit. Meanwhile, the camera service framework receives the start instruction of the 4K HDR control and calls the camera interface of the hardware abstraction layer to send the first instruction to the camera driving module. The camera driving module acquires the preview image of the camera preview interface according to the first instruction. In some embodiments, the camera driving module calls the camera interface of the hardware abstraction layer to send the preview image to the display screen, so that the display screen displays the preview image. The preview image is a preview image with the 10bit LUT effect. In other embodiments, the encoding module receives an instruction to start recording, the camera driving module calls the camera interface of the hardware abstraction layer to send the video frames of the recorded video image to the encoding module, and the encoding module calls the encoder with the preset 10bit parameter in the media framework to encode the video frames and stores the encoded video image. The video image is a video image with the 10bit LUT effect.
In another possible implementation, when the user starts the AI mode and the AI model selects the LUT according to the shooting scene, the AI recommendation module receives the start instruction of the AI model and calls the camera interface to acquire the scene and brightness of the preview image from the camera driving module, as shown in fig. 16. The camera driving module calls the camera interface to send the scene and brightness of the preview image to the AI recommendation module. The AI recommendation module identifies the scene and brightness of the preview image and matches the LUT template corresponding to the preview image.
In some embodiments, the AI recommendation module automatically selects a target LUT and sends the selected target LUT to the LUT control module. The LUT control module calls a camera interface of the hardware abstraction layer to send a first instruction to the camera driving module. The first instruction includes an identification of a target LUT, and the first instruction is used for indicating acquisition of a preview image of a camera preview interface. And the camera driving module acquires a preview image of the camera preview interface according to the first instruction, and invokes the camera interface of the hardware abstraction layer to send the preview image to the display screen so that the display screen displays the preview image. The preview image is an image with LUT effect.
In other embodiments, when the user selects the target LUT himself, the LUT control module receives an instruction to determine the target LUT, invokes the camera interface of the hardware abstraction layer to send a first instruction to the camera driver module. And the camera driving module acquires a preview image of the camera preview interface according to the first instruction, and invokes the camera interface of the hardware abstraction layer to send the preview image to the display screen so that the display screen displays the preview image. The preview image is an image with LUT effect.
It should be noted that, the foregoing embodiments may be referred to for an illustration from when the user starts recording video to when the user ends recording video, which is not repeated here. In addition, reference may be made to the above embodiments for an illustration of the user after turning on the 4K HDR function, which is not described in detail herein.
In some embodiments, the AI recommendation module may identify the preview image periodically (i.e., once every preset time period). If the scene or brightness of the preview image changes, the AI recommendation module re-matches the LUT template corresponding to the preview image. For the description of how the AI recommendation module identifies the preview scene and matches the LUT template corresponding to the preview image, refer to the above embodiments; details are not repeated here.
In some embodiments, between the framework layer and HAL layer shown in fig. 16, a system library and runtime may also be included.
In the embodiments of the application, when the electronic device enters the camera application and is in the movie mode, the electronic device may execute the shooting method provided in the embodiments of the application. Taking the electronic device in fig. 16 as an example, a specific procedure of the shooting method is described below; in some embodiments, the shooting method may include steps S1 to S12 as shown in fig. 16 and 17.
S1, the electronic equipment receives a starting instruction of the LUT control through the LUT template module.
Illustratively, the electronic device generates the launch instruction in response to a user operation of the LUT control. And then, the electronic equipment sends a starting instruction of the LUT control to the LUT template module so that the LUT template module receives the starting instruction of the LUT control. The operation may be a voice operation, a touch operation, a gesture operation, or the like. The touch operation may be, for example, a click operation, a slide operation, or the like. The embodiments of the present application are not limited in this regard.
Correspondingly, after the LUT template module receives a starting instruction of the LUT control, the LUT template module calls the LUT template. Wherein the LUT template is pre-stored in the electronic device. For example, as shown in fig. 16, LUT templates may be stored in a configuration library. The LUT template includes a first LUT template and a second LUT template. It should be appreciated that when 4K HDR is in an unopened state, the LUT template invoked by the LUT template module is the first LUT template; the first LUT template includes LUT1, LUT2, LUT3, LUT8. When the 4K HDR is in an on state, the LUT template called by the LUT template module is a second LUT template; the second LUT template includes LUT9, LUT10, LUT11, LUT16.
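The template dispatch in step S1 (first template when 4K HDR is off, second when on) reduces to a simple branch; the template contents follow the lists above, with the intermediate LUTs elided as in the text, and the function name is an illustrative assumption:

```python
FIRST_LUT_TEMPLATE = ["LUT1", "LUT2", "LUT3", "LUT8"]      # 8bit LUTs (4K HDR off)
SECOND_LUT_TEMPLATE = ["LUT9", "LUT10", "LUT11", "LUT16"]  # 10bit LUTs (4K HDR on)

def invoke_lut_template(hdr_on):
    """Return the LUT template the LUT template module should invoke."""
    return SECOND_LUT_TEMPLATE if hdr_on else FIRST_LUT_TEMPLATE
```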
In some embodiments, when 4K HDR is in an unopened state, as shown in fig. 17, the photographing method provided in the embodiments of the present application includes:
s2a, the electronic equipment calls a first LUT template through the LUT template module.
S3a, the electronic equipment sends a first LUT template to the display screen through the LUT template module.
S4a, the electronic equipment displays the first LUT template through a display screen.
In other embodiments, when the 4K HDR is in an on state, the electronic device receives, through the HDR module, the start instruction of the 4K HDR control and sends the start instruction of the 4K HDR control to the LUT template module. Correspondingly, after the LUT template module receives the start instruction of the 4K HDR control, as shown in fig. 17, the shooting method provided in the embodiments of the present application includes:
S2b, the electronic equipment calls a second LUT template through the LUT template module.
S3b, the electronic equipment sends a second LUT template to the display screen through the LUT template module.
S4b, the electronic equipment displays the second LUT template through the display screen.
As also shown in fig. 17, when the 4K HDR is in the on state, the electronic device receives, through the HDR module, the start instruction of the 4K HDR control, and simultaneously sends the start instruction of the 4K HDR control to the encoding module. After the encoding module receives the start instruction of the 4K HDR control, the parameters of the encoder in the media framework are set to 10 bits.
With reference to any one of the foregoing embodiments, as shown in fig. 17, the photographing method provided in the embodiment of the present application further includes:
S5, the electronic equipment selects a target LUT through the LUT template module.
Illustratively, the electronic device selects the target LUT in response to a user operation of one of the LUTs in the LUT template.
It is understood that when the display screen of the electronic device displays the first LUT template, the target LUT selected by the LUT template module is one of the LUTs in the first LUT template. When the display screen of the electronic device displays the second LUT template, the target LUT selected by the LUT template module is one of the LUTs in the second LUT template.
Taking the first LUT template as an example, in response to the user's operation on LUT2 in the first LUT template, the electronic device selects LUT2, that is, LUT2 is the target LUT. In some embodiments, after the LUT template module selects the target LUT, the electronic device sends the target LUT to the display screen through the LUT template module, so that the display screen displays the target LUT. In other embodiments, after the LUT template module selects the target LUT, the electronic device sends the target LUT to the LUT control module through the LUT template module.
S6, the electronic equipment sends the identification information of the target LUT to the camera driving module through the LUT control module. Illustratively, the LUT control module invokes the camera interface of the hardware abstraction layer to send the identification information of the target LUT to the camera driver module.
In some embodiments, when the target LUT is LUT2, the identification of the target LUT is 2.
S7, the electronic equipment acquires a preview image of the preview interface through the camera driving module according to the identification of the target LUT.
In some embodiments, when the 4K HDR is in an unopened state, the camera driver module obtains a preview image of the camera preview interface according to the identification information of the target LUT. It should be appreciated that the preview image is a preview image having an 8-bit LUT effect.
In other embodiments, when the 4K HDR is in an on state, the electronic device receives, through the HDR module, a start instruction of the 4K HDR control, and simultaneously sends the start instruction of the 4K HDR control to the camera service framework. After receiving the start instruction of the 4K HDR control, the camera service framework calls a camera interface of the hardware abstraction layer to send the identification information of the target LUT to the camera driver module. The camera driver module obtains a preview image of the camera preview interface according to the identification information of the target LUT. It should be appreciated that the preview image is a preview image having a 10-bit LUT effect.
Correspondingly, after the camera driving module acquires the preview image, the preview image is sent to the display screen, so that the display screen displays the preview image. Illustratively, the camera driver module invokes the camera interface of the hardware abstraction layer to send the preview image to the display screen. It should be appreciated that the preview image is an image having a target LUT effect.
S8, the electronic equipment receives a recording starting instruction through the coding module.
Illustratively, the electronic device generates the start recording instruction in response to a user operation on the virtual shutter key. Then, the electronic device sends the start recording instruction to the encoding module, so that the encoding module receives the start recording instruction. The virtual shutter key is a key for starting and ending video recording. For example, as shown in (1) in fig. 15 of the above embodiment, the virtual shutter key may be the control 602.
And S9, the electronic equipment sends the video frames of the recorded video images to the coding module through the camera driving module.
Illustratively, the camera driver module invokes the camera interface of the hardware abstraction layer to report video frames of the recorded video images to the encoding module.
S10, the electronic equipment encodes the video frames of the video image through the encoding module.
The video image is an image with the target LUT effect, so the video frames are video frames with the target LUT effect. Illustratively, the video frames may include frames 1 through N, where frame 1 is the first frame at the start of recording and frame N is the last frame at the end of recording. In some embodiments, the camera driver module invokes the camera interface to report frames 1 through N to the encoding module in sequence, and the encoding module encodes them in sequence.
In some embodiments, when the 4K HDR control is in an on state, the camera driver module invokes a camera interface of the hardware abstraction layer to send video frames of the recorded video image to the encoding module, and the encoding module invokes the encoder preset with the 10-bit parameter in the media framework to encode the video frames and stores the encoded video image. The video image is a video image with a 10-bit LUT effect.
S11, the electronic equipment receives the recording ending instruction through the coding module.
Illustratively, the electronic device generates an end recording instruction in response to a user operation of the virtual shutter key. And then, the electronic equipment sends a recording ending instruction to the coding module so that the coding module receives the recording ending instruction.
S12, the electronic equipment stores the video image through the coding module.
As shown in connection with fig. 16, the electronic device may save the video image to a gallery of an application layer through an encoding module, for example.
In the embodiment of the application, since the film mode, which includes the LUT function and the 4K HDR function, is added to the camera, when the user shoots in the film mode, the LUT function and the 4K HDR function can be enabled according to the scene, the brightness of the preview image, or the user's preference. When the user selects the LUT function, shooting effects or styles corresponding to different LUTs can be displayed; when the user also enables the 4K HDR function, the captured image is clearer and more vivid. The styles or effects of images shot by the electronic device are thereby enriched, meeting the diversified shooting requirements of current users.
In other embodiments, as shown in fig. 16 and 18, the photographing method may further include steps A1 to a17.
And A1, the electronic equipment receives a starting instruction of an AI model through an AI recommendation module.
Illustratively, the electronic device generates a startup instruction of the AI model in response to a user operation of the "AI movie hue" setting item. And then, the electronic equipment sends an AI model starting instruction to the AI recommendation module so that the AI recommendation module receives the AI model starting instruction.
And A2, the electronic equipment sends an instruction for acquiring the scene and the brightness of the preview image to the camera driving module through the AI recommendation module.
Illustratively, the AI recommendation module invokes the camera interface of the hardware abstraction layer to send instructions to the camera driver module to obtain the scene and brightness of the preview image.
And step A3, the electronic equipment returns the scene and brightness of the preview image through the camera driving module.
Illustratively, the camera driver module invokes the camera interface of the hardware abstraction layer to return the scene and brightness of the preview image to the AI recommendation module.
As shown in fig. 18, in some embodiments, when 4K HDR is in an unopened state, the photographing method provided in the embodiments of the present application includes:
And A4a, the electronic equipment identifies the scene and the brightness of the preview image through the AI recommendation module and matches the first LUT corresponding to the preview image.
In some embodiments, the AI recommendation module identifies the brightness of the preview image based on the first preset label. The first preset label is used for representing the tones corresponding to different gray scale intervals. Illustratively, the first preset label includes black, shadow, midtone, bright, and highlight. For example, the gray scale interval corresponding to black is 0 to 33, the gray scale interval corresponding to shadow is 34 to 95, the gray scale interval corresponding to midtone is 96 to 169, the gray scale interval corresponding to bright is 170 to 224, and the gray scale interval corresponding to highlight is 225 to 255.
The above-described division of the gradation intervals corresponding to the respective hues is merely an example, and does not limit the embodiments of the present application.
In the embodiment of the present application, highlight refers to portions of the preview image that are white or near white, such as lights, the sun, or reflections off shiny objects. Bright refers to light-colored portions of the preview image with detail or texture, such as light-colored clothing, a wall, or light parts of a human face. Black refers to portions of the preview image that are black or near black, for example, an unlit portion.
For example, the AI recommendation module may identify the preview image according to the first preset label and determine the tone corresponding to the preview image. For example, when the luminance value (which may also be referred to as a gray value) of the preview image is 32, that is, the luminance value is located in the gray scale interval 0 to 33, the AI recommendation module identifies the tone corresponding to the preview image as black. When the luminance value of the preview image is 175, that is, the luminance value is located in the gray scale interval 170 to 224, the AI recommendation module identifies the tone corresponding to the preview image as bright.
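The interval-to-tone mapping above can be sketched as a simple lookup. This is an illustrative Python sketch using the intervals stated above; `classify_tone` and `TONE_INTERVALS` are assumed names, not from the application.

```python
# Illustrative sketch of the first preset label: map a luminance (gray)
# value in 0..255 to a tone using the intervals given above.
TONE_INTERVALS = [
    (0, 33, "black"),
    (34, 95, "shadow"),
    (96, 169, "midtone"),
    (170, 224, "bright"),
    (225, 255, "highlight"),
]

def classify_tone(gray):
    for low, high, tone in TONE_INTERVALS:
        if low <= gray <= high:
            return tone
    raise ValueError("gray value must be in 0..255")

print(classify_tone(32))   # black   (first example above)
print(classify_tone(175))  # bright  (second example above)
```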
In some embodiments, the AI recommendation module may further identify the brightness of the preview image based on a second preset label. The second preset label is used for representing the exposure of the image. Illustratively, the exposure is determined from the proportions of black and highlight in the image. For example, when the black proportion is less than or equal to 5%, the exposure of the preview image is excessive (i.e., overexposed). When the black proportion is greater than 5% and the highlight proportion is greater than or equal to 10%, the exposure of the preview image is high (i.e., bright). When the black proportion is greater than 5% and the highlight proportion is less than 10%, the exposure of the preview image is normal (i.e., balanced).
For example, in combination with the above embodiment, when the AI recommendation module identifies the tone of the preview image as highlight, if the black proportion of the preview image is less than or equal to 5%, the AI recommendation module identifies the preview image as overexposed, that is, the preview image is highlight and overexposed. If the black proportion of the preview image is greater than 5% and the highlight proportion is greater than or equal to 10%, the preview image is bright, that is, the AI recommendation module identifies the preview image as highlight and bright.
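The exposure decision described above reduces to two threshold tests. A minimal sketch, assuming the 5% and 10% thresholds given in the text; `classify_exposure` is an illustrative name.

```python
def classify_exposure(black_ratio, highlight_ratio):
    """Second preset label: classify exposure from the proportions of
    black and highlight pixels (ratios in 0..1), per the thresholds above."""
    if black_ratio <= 0.05:
        return "overexposed"   # black proportion <= 5%
    if highlight_ratio >= 0.10:
        return "bright"        # black > 5% and highlight >= 10%
    return "balanced"          # black > 5% and highlight < 10%

print(classify_exposure(0.03, 0.20))  # overexposed
print(classify_exposure(0.20, 0.15))  # bright
print(classify_exposure(0.20, 0.05))  # balanced
```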
In some embodiments, the AI recommendation module may further identify the scene of the preview image according to a third preset label. The third preset label is used for indicating scene information of the preview image. Exemplary scene information includes portrait, food, and the like. Illustratively, a portrait scene refers to a preview image that includes the five sense organs of a person, or a preview image in which the facial features occupy more than 50% of the total image. A food scene refers to a preview image that includes food (e.g., coffee, bread, etc.).
For example, the AI recommendation module may process the preview image using image processing techniques and, if it is identified that the preview image includes the five sense organs of a person, indicate that the scene of the preview image is a portrait. If the preview image is recognized to include food, the scene of the preview image is shown as food.
In this embodiment of the present application, the AI recommendation module may identify the scene and brightness of the preview image in combination with the first preset label, the second preset label, and the third preset label, so as to match the LUT corresponding to the preview image. Taking the case where the AI recommendation module identifies the tone of the preview image as highlight according to the first preset label as an example, table 2 below describes the correspondence between the second preset label, the third preset label, and the LUT.
TABLE 2
(Table 2 is provided as an image in the original publication; it maps combinations of the second preset label (exposure) and the third preset label (scene) to the corresponding LUTs.)
Note that, the correspondence between the second preset tag and the third preset tag shown in table 2 and the LUT template is only an example of the present application, and is not limited to the embodiments of the present application.
It should be understood that in this embodiment, the LUT template corresponding to the preview image matched by the AI recommendation module may include only one LUT, or may include two or more LUTs, which is not limited in this embodiment of the present application. In addition, when 4K HDR is in an unopened state, the LUT template includes LUTs among LUT1, LUT2, LUT3, …, LUT8; when 4K HDR is in an on state, the LUT template includes LUTs among LUT9, LUT10, LUT11, …, LUT16.
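The label-to-LUT matching has the shape of a simple table lookup: a pair of (second preset label, third preset label) keys into a recommended LUT. Since the concrete contents of table 2 are only available as an image, every pairing in the sketch below is invented purely for illustration, as are the names `RECOMMENDED_LUT` and `match_lut`.

```python
# Hypothetical stand-in for Table 2 (the real pairings are in the image):
# (second preset label, third preset label) -> recommended LUT.
RECOMMENDED_LUT = {
    ("overexposed", "portrait"): "LUT1",
    ("overexposed", "food"): "LUT2",
    ("bright", "portrait"): "LUT3",
    ("bright", "food"): "LUT4",
    ("balanced", "portrait"): "LUT5",
    ("balanced", "food"): "LUT6",
}

def match_lut(exposure, scene, default="LUT1"):
    """Look up the LUT recommended for the identified label pair."""
    return RECOMMENDED_LUT.get((exposure, scene), default)

print(match_lut("bright", "portrait"))  # LUT3
print(match_lut("balanced", "sunset"))  # LUT1 (unknown pair falls back to the default)
```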
And step A5a, the electronic equipment sends the first LUT to the display screen through the AI recommendation module.
And step A6a, the electronic equipment displays the first LUT through a display screen.
As also shown in fig. 18, in other embodiments, when the 4K HDR is in an on state, the electronic device receives, through the HDR module, an activation instruction of the 4K HDR control, and sends, to the AI recommendation module, the activation instruction of the 4K HDR control. Correspondingly, after the AI recommendation module receives a starting instruction of the 4K HDR control, the shooting method provided by the embodiment of the application includes:
and step A4b, the electronic equipment identifies the scene and the brightness of the preview image through the AI recommendation module and matches a second LUT corresponding to the preview image.
And step A5b, the electronic equipment sends a second LUT to the display screen through the AI recommendation module.
And step A6b, the electronic equipment displays the second LUT through the display screen.
It should be noted that, for the illustration of step A4b, step A5b and step A6b, reference may be made to the illustration of step A4a, step A5a and step A6a in the above embodiments, and the details will not be described here.
With reference to any one of the foregoing embodiments, the photographing method provided in the embodiment of the present application further includes:
and A7, the electronic equipment receives a starting instruction of the LUT control through the LUT template module.
Illustratively, the electronic device generates the launch instruction in response to a user operation of the LUT control. And then, the electronic equipment sends a starting instruction of the LUT control to the LUT template module so that the LUT template module receives the starting instruction of the LUT control.
Correspondingly, after the LUT template module receives the start instruction of the LUT control, the LUT template module calls the LUT template. The LUT template is pre-stored in the electronic device; for example, as shown in fig. 16, LUT templates may be stored in a configuration library. The LUT template includes a first LUT template and a second LUT template. It should be appreciated that when 4K HDR is in an unopened state, the LUT template invoked by the LUT template module is the first LUT template, which includes LUT1, LUT2, LUT3, …, LUT8. When 4K HDR is in an on state, the LUT template invoked by the LUT template module is the second LUT template, which includes LUT9, LUT10, LUT11, …, LUT16.
In some embodiments, when 4K HDR is in an unopened state, as shown in fig. 18, the photographing method provided in the embodiments of the present application includes:
and step A8a, the electronic equipment calls a first LUT template through the LUT template module.
And step A9a, the electronic equipment sends a first LUT template to the display screen through the LUT template module.
Accordingly, the electronic device displays the first LUT template through the display screen.
In other embodiments, when the 4K HDR is in an on state, the electronic device receives, by the HDR module, a start instruction of the 4K HDR control, and sends the start instruction of the 4K HDR control to the LUT template module. Accordingly, after the LUT template module receives the start instruction of the 4K HDR control, as shown in fig. 18, the shooting method provided in the embodiment of the present application includes:
And step A8b, the electronic equipment calls a second LUT template through the LUT template module.
And step A9b, the electronic equipment sends a second LUT template to the display screen through the LUT template module.
Accordingly, the electronic device displays the second LUT template via the display screen.
As also shown in fig. 18, when the 4K HDR is in the on state, the electronic device receives, through the HDR module, the start instruction of the 4K HDR control, and simultaneously sends the start instruction of the 4K HDR control to the encoding module. After the encoding module receives the start instruction of the 4K HDR control, the parameters of the encoder in the media framework are set to 10 bits.
With reference to any one of the foregoing embodiments, as shown in fig. 18, the photographing method provided in the embodiment of the present application further includes:
and A10, the electronic equipment selects a target LUT through the LUT template module.
Illustratively, the electronic device selects the target LUT in response to a user operation of one of the LUTs in the LUT template.
It is understood that when the display screen of the electronic device displays the first LUT template, the target LUT selected by the LUT template module is one of the LUTs in the first LUT template. When the display screen of the electronic device displays the second LUT template, the target LUT selected by the LUT template module is one of the LUTs in the second LUT template.
Taking the first LUT template as an example, in response to the user's operation on LUT2 in the first LUT template, the electronic device selects LUT2, that is, LUT2 is the target LUT. In some embodiments, after the LUT template module selects the target LUT, the electronic device sends the target LUT to the display screen through the LUT template module, so that the display screen displays the target LUT. In other embodiments, after the LUT template module selects the target LUT, the electronic device sends the target LUT to the LUT control module through the LUT template module.
And step A11, the electronic equipment sends the identification information of the target LUT to the camera driving module through the LUT control module. Illustratively, the LUT control module invokes the camera interface of the hardware abstraction layer to send the identification information of the target LUT to the camera driver module.
In some embodiments, when the target LUT is LUT2, the identification of the target LUT is 2.
And step A12, the electronic equipment acquires a preview image of the preview interface through the camera driving module according to the identification of the target LUT.
In some embodiments, when the 4K HDR is in an unopened state, the camera driver module obtains a preview image of the camera preview interface according to the identification information of the target LUT. It should be appreciated that the preview image is a preview image having an 8-bit LUT effect.
In other embodiments, when the 4K HDR is in an on state, the electronic device receives, through the HDR module, a start instruction of the 4K HDR control, and simultaneously sends the start instruction of the 4K HDR control to the camera service framework. After receiving the start instruction of the 4K HDR control, the camera service framework calls a camera interface of the hardware abstraction layer to send the identification information of the target LUT to the camera driver module. The camera driver module obtains a preview image of the camera preview interface according to the identification information of the target LUT. It should be appreciated that the preview image is a preview image having a 10-bit LUT effect.
Correspondingly, after the camera driving module acquires the preview image, the preview image is sent to the display screen, so that the display screen displays the preview image. Illustratively, the camera driver module invokes the camera interface of the hardware abstraction layer to send the preview image to the display screen. It should be appreciated that the preview image is an image having a target LUT effect.
And step A13, the electronic equipment receives a recording starting instruction through the coding module.
Illustratively, the electronic device generates a start recording instruction in response to a user operation of the virtual shutter key. And then, the electronic equipment sends a recording starting instruction to the coding module so that the coding module receives the recording starting instruction. The virtual shutter key is a key for starting video recording and ending video recording. For example, as shown in (1) in fig. 15 in connection with the above embodiment, the virtual shutter key may be, for example, a control 602.
And step A14, the electronic equipment sends the video frames of the recorded video images to the coding module through the camera driving module.
Illustratively, the camera driver module invokes the camera interface of the hardware abstraction layer to report video frames of the recorded video images to the encoding module.
And step A15, the electronic equipment encodes the video frames of the video images through an encoding module.
The video image is an image with the target LUT effect, so the video frames are video frames with the target LUT effect. Illustratively, the video frames may include frames 1 through N, where frame 1 is the first frame at the start of recording and frame N is the last frame at the end of recording. In some embodiments, the camera driver module invokes the camera interface to report frames 1 through N to the encoding module in sequence, and the encoding module encodes them in sequence.
In some embodiments, when the 4K HDR control is in an on state, the camera driver module invokes a camera interface of the hardware abstraction layer to send video frames of the recorded video image to the encoding module, and the encoding module invokes the encoder preset with the 10-bit parameter in the media framework to encode the video frames and stores the encoded video image. The video image is a video image with a 10-bit LUT effect.
And step A16, the electronic equipment receives the recording ending instruction through the coding module.
Illustratively, the electronic device generates an end recording instruction in response to a user operation of the virtual shutter key. And then, the electronic equipment sends a recording ending instruction to the coding module so that the coding module receives the recording ending instruction.
And step A17, the electronic equipment stores the video image through the coding module.
As shown in connection with fig. 16, the electronic device may save the video image to a gallery of an application layer through an encoding module, for example.
In some embodiments, the AI recommendation module may periodically trigger recognition. When the AI recommendation module recognizes that the preview image has changed, the electronic device may re-execute steps A2-a 17.
In the embodiment of the application, the electronic equipment automatically identifies the preview image through the AI model and matches the LUT template corresponding to the preview image, so that the shooting style or effect of the electronic equipment is enriched, different shooting requirements of a current user are met, and further the shooting experience of the user is improved.
Fig. 20 is a schematic flow chart of a shooting method according to an embodiment of the present application; the method can be applied to electronic equipment comprising a camera and a display screen, wherein the electronic equipment is preset with a plurality of first color lookup tables (LUTs); the method comprises the following steps:
S101, the electronic equipment responds to the operation of starting the camera by a user, and a shooting preview interface is displayed.
Wherein the shot preview interface includes a first preview image.
Taking the electronic device as a mobile phone as an example, in some embodiments, in response to the user's operation of starting the camera, the electronic device launches the camera application to shoot.
For example, in combination with the above embodiment, the shooting preview interface may be the interface of the camera in video mode, in professional mode, or in movie mode. When the shooting preview interface is the interface of the camera in video mode or in professional mode, the first preview image is a preview image not processed by a LUT; when the shooting preview interface is the interface of the camera in movie mode, the first preview image is a preview image processed by a default LUT.
S102, the electronic device determines a first target LUT.
Wherein the first target LUT is one of a plurality of first LUTs.
In some embodiments, the user may select the corresponding LUT according to different shooting scenes, that is, the first target LUT is determined by the user. In other embodiments, the user may select the AI model; before the mobile phone records video, the AI model identifies the preview image included in the preview interface and matches the LUT corresponding to the preview image, that is, the first target LUT is determined by the AI model of the electronic device.
S103, the electronic equipment processes the first preview image according to the first target LUT so as to obtain a second preview image.
Wherein the preview effect of the first preview image is different from the preview effect of the second preview image.
In some embodiments, the electronic device may convert the gray value of each pixel in the first preview image according to the identification of the first target LUT, thereby obtaining the second preview image.
In this embodiment, on the shooting preview interface, the electronic device may process the first preview image according to the first target LUT determined by the electronic device to obtain the second preview image. Because the preview effects of the first preview image and the second preview image are different, shooting effects or styles corresponding to different LUTs can be displayed during video recording, enriching the effects of the recorded video images and meeting the shooting requirements of current users.
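The per-pixel conversion of step S103 can be sketched with a one-dimensional gray-value lookup table. A real color LUT is typically three-dimensional (RGB to RGB); the 1D form is used here only for brevity, and `apply_lut` is an illustrative name, not from the application.

```python
def apply_lut(image, lut):
    """Convert the gray value of each pixel through a 256-entry LUT,
    turning the first preview image into the second preview image."""
    assert len(lut) == 256
    return [[lut[pixel] for pixel in row] for row in image]

# Example: an inverting LUT as a stand-in for a stylistic LUT.
invert_lut = list(range(255, -1, -1))
first_preview = [[0, 128, 255]]
second_preview = apply_lut(first_preview, invert_lut)
print(second_preview)  # [[255, 127, 0]]
```

With an identity table (`list(range(256))`) the second preview image equals the first, which is why the preview effect only changes when a non-trivial target LUT is selected.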
Fig. 21 is a schematic flow chart of another photographing method according to an embodiment of the present application; the method comprises the following steps:
S201, in response to the user's operation of starting the camera, the electronic equipment obtains a first image through the viewfinder and displays, on the shooting preview interface, a third preview image obtained by processing the first image.
Taking the electronic device as a mobile phone as an example, in some embodiments, in response to the user's operation of starting the camera, the electronic device launches the camera application to shoot. The first image may also be referred to as a viewfinder image.
In some embodiments, the electronic device may process the first image using a third target LUT to obtain the third preview image. Illustratively, the third target LUT is a target LUT determined by the electronic device identifying the image information of the first image using the AI model. In combination with the above embodiment, when the electronic device determines the first target LUT using the AI model, the third target LUT may be the same as the first target LUT, and the third preview image may be the same as the second preview image.
S202, the electronic equipment obtains a second image through the viewfinder, and a fourth preview image obtained by processing the second image is automatically displayed on the shooting preview interface.
Wherein the preview effect of the third preview image is different from the preview effect of the fourth preview image.
For example, when an image of the viewing interface of the electronic device changes (e.g., from a first image to a second image), the electronic device may re-view (e.g., the electronic device views the second image). Wherein the second image may also be referred to as a viewfinder image.
In some embodiments, the electronic device may process the second image using a fourth target LUT to obtain the fourth preview image. Illustratively, the fourth target LUT is a target LUT determined by the electronic device identifying the image information of the second image using the AI model. In combination with the above embodiment, when the electronic device determines the first target LUT using the AI model, the fourth target LUT may be the same as the second target LUT, and the fourth preview image is the changed preview image.
As shown in fig. 13 and 14, the third preview image may be, for example, a preview image shown in (1) in fig. 13 and (1) in fig. 14; the fourth preview image may be, for example, a preview image shown in fig. 13 (2) and fig. 14 (2).
In addition, still as shown in conjunction with fig. 13 and 14, the third target LUT may be, for example, LUT2 shown in (1) in fig. 13 and (1) in fig. 14; the fourth target LUT may be, for example, LUT5 shown in (2) in fig. 13 and (2) in fig. 14.
In this embodiment, the electronic device may process different images obtained by framing to obtain different preview images corresponding to the different images, so that when the framing image of the electronic device changes, the preview images corresponding to the framing image can be automatically matched, so as to enrich the preview effect of the preview image, further enrich the effect of the recorded video image, and meet the shooting requirement of the current user.
The embodiment of the application provides an electronic device, which may include: a display screen (e.g., a touch screen), a camera, a memory, and one or more processors. The display screen, the camera, the memory, and the processor are coupled. The memory is configured to store computer program code, and the computer program code includes computer instructions. When the processor executes the computer instructions, the electronic device may perform the functions or steps performed by the mobile phone in the above method embodiments. For the structure of the electronic device, reference may be made to the electronic device 100 shown in fig. 5.
Embodiments of the present application also provide a chip system. As shown in fig. 22, the chip system 1800 includes at least one processor 1801 and at least one interface circuit 1802.
The processor 1801 and the interface circuit 1802 may be interconnected by wires. For example, the interface circuit 1802 may be used to receive signals from other devices (e.g., a memory of the electronic device). For another example, the interface circuit 1802 may be used to send signals to other devices (e.g., the processor 1801). The interface circuit 1802 may, for example, read instructions stored in the memory and send the instructions to the processor 1801. When the instructions are executed by the processor 1801, the electronic device may be caused to perform the steps performed by the mobile phone in the above embodiments. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
The embodiment of the application also provides a computer storage medium including computer instructions which, when run on the electronic device, cause the electronic device to perform the functions or steps performed by the mobile phone in the above method embodiments.
The present application also provides a computer program product, which when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the above-mentioned method embodiments.
It will be apparent to those skilled in the art from the above description that, for convenience and brevity, only the division into the above functional modules is illustrated as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A photographing method applied to an electronic device comprising a camera and a display screen, the method comprising:
the electronic equipment responds to the operation of starting the camera by a user, a shooting preview interface is displayed, the shooting preview interface comprises a first preview image and further comprises a high dynamic range image HDR control, wherein a color lookup table LUT template displayed by the shooting preview interface under the condition that the HDR control is closed is a first LUT template, a LUT template displayed by the shooting preview interface under the condition that the HDR control is opened is a second LUT template, the first LUT template is an 8-bit LUT template, and the second LUT template is a 10-bit LUT template;
the electronic equipment receives a first selection operation of the user under the condition that the LUT template presented by the shooting preview interface is a first LUT template;
the electronic equipment responds to the first selection operation and determines a first target LUT from the first LUT template;
the electronic equipment processes the first preview image based on the first target LUT to obtain a second preview image, and the preview effect of the second preview image is different from that of the first preview image;
the electronic equipment receives a second selection operation of the user under the condition that the LUT template presented by the shooting preview interface is a second LUT template;
the electronic equipment responds to the second selection operation and determines a second target LUT from a second LUT template;
the electronic equipment processes the first preview image based on the second target LUT to obtain a third preview image, and the preview effect of the third preview image is different from that of the first preview image;
the electronic equipment responds to the operation of a user on a shooting control, and records a video image, wherein the video image is obtained by processing the image acquired by the camera by the electronic equipment according to the first target LUT or the second target LUT;
wherein the method further comprises:
the electronic device determines the first target LUT, comprising:
the electronic equipment operates a preset AI model according to a first preset label, a second preset label and a third preset label to identify the image information of the first preview image; the first preset label is used for representing tone corresponding to different gray scale intervals; the second preset label is used for representing the exposure of the first preview image; the third preset tag is used for representing scene information of the first preview image; the image information includes scene and brightness;
the electronic device identifies the scene and brightness of the first preview image based on the first preset label, the second preset label and the third preset label, so as to determine the first target LUT matched with the first preview image.
2. The method of claim 1, wherein the shooting preview interface is an interface of the electronic device in a movie mode; the film mode is a mode that the electronic equipment records video according to the first target LUT or the second target LUT; the electronic equipment responds to the operation of starting the camera by a user, displays a shooting preview interface, and comprises the following steps:
The electronic equipment responds to the operation of starting the camera by a user, and a first interface is displayed; the first interface comprises a first control, and the first control is used for indicating a first application calling the camera to enter a film mode;
the electronic equipment responds to the operation of a user on the first control, and a shooting preview interface is displayed; the first preview image is an image obtained by processing an image acquired by the camera according to a default LUT, wherein the default LUT is a preset LUT in the first LUT template or the second LUT template.
3. The method of claim 2, wherein the capture preview interface includes first indication information for indicating to a user to adjust the state of the display screen to a landscape state.
4. The method of claim 3, wherein the capture preview interface further comprises a LUT control; the electronic device determines a first target LUT comprising:
the electronic equipment responds to the operation of the LUT control by a user and displays the first LUT template;
the electronic device determines the first target LUT in response to a user operation on one of the LUTs in the first LUT template.
5. The method according to claim 1, wherein the method further comprises:
the electronic device is responsive to user operation of the HDR control to display the second LUT template if the HDR control is open.
6. The method of claim 1, wherein the electronic device processing the first preview image according to the first target LUT to obtain a second preview image, comprising:
the electronic equipment acquires identification information of the first target LUT;
the electronic equipment acquires pixel information of the first preview image; the pixel information comprises gray values of each pixel in the first preview image;
and the electronic equipment converts the pixel information according to the identification information so as to obtain a second preview image.
7. The method of claim 4, wherein the electronic device comprises a LUT template module, the electronic device displaying the first LUT template in response to a user operation of the LUT control, comprising:
the LUT template module receives a starting instruction of the LUT control;
the LUT template module responds to a starting instruction of the LUT control and calls a first LUT template;
the LUT template module sends the first LUT template to the display screen.
8. An electronic device comprising a memory, a display, one or more cameras, and one or more processors; the display screen is configured to display an image captured by the camera or an image generated by the processor, and the memory stores computer program code comprising computer instructions that, when executed by the processor, cause the electronic device to perform the method of any of claims 1-7.
9. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-7.
CN202110926567.1A 2021-08-12 2021-08-12 Shooting method and electronic equipment Active CN113810602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110926567.1A CN113810602B (en) 2021-08-12 2021-08-12 Shooting method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110926567.1A CN113810602B (en) 2021-08-12 2021-08-12 Shooting method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113810602A CN113810602A (en) 2021-12-17
CN113810602B true CN113810602B (en) 2023-07-11

Family

ID=78893519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110926567.1A Active CN113810602B (en) 2021-08-12 2021-08-12 Shooting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113810602B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115914823A (en) * 2021-08-12 2023-04-04 荣耀终端有限公司 Shooting method and electronic equipment
CN114630045A (en) * 2022-02-11 2022-06-14 珠海格力电器股份有限公司 Photographing method and device, readable storage medium and electronic equipment
CN115564659B (en) * 2022-02-28 2024-04-05 荣耀终端有限公司 Video processing method and device
CN116708751B (en) * 2022-09-30 2024-02-27 荣耀终端有限公司 Method and device for determining photographing duration and electronic equipment
CN116723416B (en) * 2022-10-21 2024-04-02 荣耀终端有限公司 Image processing method and electronic equipment
CN116668866B (en) * 2022-11-21 2024-04-19 荣耀终端有限公司 Image processing method and electronic equipment
CN116668838B (en) * 2022-11-22 2023-12-05 荣耀终端有限公司 Image processing method and electronic equipment
CN116703692B (en) * 2022-12-30 2024-06-07 荣耀终端有限公司 Shooting performance optimization method and device
CN118555482A (en) * 2023-01-31 2024-08-27 荣耀终端有限公司 Photographing processing method and electronic equipment
CN117956290A (en) * 2023-12-22 2024-04-30 荣耀终端有限公司 Method for acquiring ambient light brightness, electronic equipment, storage medium and chip
CN118018861A (en) * 2024-03-06 2024-05-10 荣耀终端有限公司 Shooting preview method and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323456A (en) * 2014-12-16 2016-02-10 维沃移动通信有限公司 Image previewing method for photographing device and image photographing device
CN111630836A (en) * 2018-03-26 2020-09-04 华为技术有限公司 Intelligent auxiliary control method and terminal equipment
CN112258544A (en) * 2020-10-26 2021-01-22 王海 Camera filter self-adaptive switching method and system based on artificial intelligence

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105812646A (en) * 2014-12-30 2016-07-27 Tcl集团股份有限公司 Shooting method, shooting device, image processing method, image processing device, and communication system
CN106375660A (en) * 2016-09-13 2017-02-01 乐视控股(北京)有限公司 Photographic processing method and device
CN109068056B (en) * 2018-08-17 2021-03-30 Oppo广东移动通信有限公司 Electronic equipment, filter processing method of image shot by electronic equipment and storage medium
CN111416950B (en) * 2020-03-26 2023-11-28 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN112511750B (en) * 2020-11-30 2022-11-29 维沃移动通信有限公司 Video shooting method, device, equipment and medium
CN113194255A (en) * 2021-04-29 2021-07-30 南京维沃软件技术有限公司 Shooting method and device and electronic equipment


Also Published As

Publication number Publication date
CN113810602A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN113810602B (en) Shooting method and electronic equipment
CN112532857B (en) Shooting method and equipment for delayed photography
WO2023016026A1 (en) Photographing method and device, storage medium and computer program product
CN111327814A (en) Image processing method and electronic equipment
CN105609035B (en) Image display device and method
CN113965694B (en) Video recording method, electronic device and computer readable storage medium
CN112887582A (en) Image color processing method and device and related equipment
CN115604572B (en) Image acquisition method, electronic device and computer readable storage medium
CN116095476B (en) Camera switching method and device, electronic equipment and storage medium
CN114466134A (en) Method and electronic device for generating HDR image
WO2023160295A1 (en) Video processing method and apparatus
CN114463191A (en) Image processing method and electronic equipment
CN115567630A (en) Management method of electronic equipment, electronic equipment and readable storage medium
US12088908B2 (en) Video processing method and electronic device
CN117201930B (en) Photographing method and electronic equipment
CN117135257B (en) Image display method, electronic equipment and computer readable storage medium
CN116668838B (en) Image processing method and electronic equipment
US20240155236A1 (en) Image processing method and electronic device
CN116048323B (en) Image processing method and electronic equipment
CN115705663B (en) Image processing method and electronic equipment
CN113891008B (en) Exposure intensity adjusting method and related equipment
CN114915722B (en) Method and device for processing video
CN108335659A (en) Method for displaying image and equipment
CN115633250A (en) Image processing method and electronic equipment
CN117119316B (en) Image processing method, electronic device, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant