CN114640783B - Photographing method and related equipment

Info

Publication number
CN114640783B
CN114640783B
Authority: CN (China)
Prior art keywords: image, isp, mobile phone, guide, preview
Prior art date
Legal status
Active
Application number
CN202011482589.5A
Other languages
Chinese (zh)
Other versions
CN114640783A (en)
Inventor
武小宇
吕飞帆
刘子鸾
张运超
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202011482589.5A
Priority to PCT/CN2021/135118 (published as WO2022127611A1)
Publication of CN114640783A
Application granted
Publication of CN114640783B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/80 Camera processing pipelines; Components thereof

Abstract

Embodiments of the present application provide a photographing method and related device, relating to the technical field of image processing, so that an electronic device can simulate the shooting style of images captured by other devices and generate images in those other shooting styles. The specific scheme is as follows: the photographing method is applied to an electronic device that includes a preset image signal processor (ISP) parameter; when the image signal processor processes a first raw domain image using the preset ISP parameter, a first image is generated. The electronic device acquires a first guide map and determines a first ISP parameter corresponding to the first guide map, where the preset ISP parameter is different from the first ISP parameter; the preset ISP parameter is then set to the first ISP parameter. When the image signal processor processes the first raw domain image using the first ISP parameter, a second image is generated.

Description

Photographing method and related equipment
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a photographing method and related equipment.
Background
Existing electronic devices have a photographing function. During photographing, light reflected by a target object is projected through a lens onto an image sensor, so that the image sensor generates an original raw domain image (or raw image). The image sensor transmits the raw domain image to an image signal processor (ISP), which can convert the raw domain image into an image in a format such as JPG. The ISP may include a plurality of algorithm modules, each implementing a different function, so that the ISP can convert the raw domain image into an image. For example, the ISP includes a noise reduction module, a color correction (CC) module, and the like; the noise reduction module performs denoising, and the CC module corrects color errors.
If the parameters of the modules in the ISP differ, the processing effect of the modules on the image also differs, so the images generated by the ISP differ slightly. For example, when two electronic devices (for example, electronic device 1 and electronic device 2) photograph the same target object but the parameters of the CC modules used by their ISPs differ, electronic device 1 generates image 1, electronic device 2 generates image 2, and the colors (such as blue) displayed in image 1 and image 2 differ slightly. In this case, the shooting styles of the two images are said to be different. Because the ISP module parameters set by different electronic device manufacturers differ, the shooting styles of images captured by electronic devices from different manufacturers differ, and even the shooting styles of images captured by different models designed by the same manufacturer may differ.
It will be appreciated that for any electronic device in a photographing mode, the shooting style of the generated captured image is fixed. That is, when generating a captured image, the electronic device processes the image through its own ISP, producing a captured image in the corresponding shooting style. In this case, it is difficult for the electronic device to generate captured images in other shooting styles.
Disclosure of Invention
The present application provides a photographing method and related device, which enable an electronic device to simulate the shooting styles of images captured by other devices and to generate images in those other shooting styles.
In order to achieve the technical purpose, the application adopts the following technical scheme:
In a first aspect, the present application provides a photographing method, which may be applied to an electronic device. The electronic device includes a preset ISP parameter, and its image signal processor processes a first raw domain image using the preset ISP parameter to generate a first image. The photographing method may include: the electronic device acquires a first guide map and determines a first ISP parameter corresponding to the first guide map, where the preset ISP parameter is different from the first ISP parameter. After the ISP parameter in the electronic device is set to the first ISP parameter, the image signal processor can process the first raw domain image using the first ISP parameter to generate a second image.
It can be appreciated that, in the process of generating the first image, the image signal processor may call a preset ISP parameter to process the first raw domain image, so as to generate the first image. When the preset ISP parameter is set as the first ISP parameter in the electronic device, the image signal processor may invoke the first ISP parameter to process the first raw domain image to generate the second image. That is, if the ISP parameters called by the image signal processor are different, the generated image is different.
Generally, because the ISP parameters in electronic devices are preset, the images generated by different electronic devices differ. If a first device presets the first ISP parameter, the first device can generate the first guide map; if the electronic device presets its own ISP parameter, the electronic device can generate the first image. Visually, a user can intuitively see that the shooting styles of the first image and the first guide map differ. Therefore, after the electronic device determines the first ISP parameter corresponding to the first guide map, it uses the first ISP parameter to process the raw domain image and generate the second image. The electronic device thereby achieves the purpose of simulating the shooting style of the first device, generating images in other shooting styles, such as the shooting style of the first guide map.
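As a purely illustrative sketch of this flow in Python: here `predict_isp_params` and `isp_pipeline` are hypothetical stand-ins for the neural network and the ISP, not functions named in this application.

```python
# A minimal sketch of the claimed flow. predict_isp_params() and
# isp_pipeline() are hypothetical stand-ins for the neural network and
# the hardware ISP described in this application, not real APIs.

def simulate_shooting_style(first_raw_image, first_guide_map,
                            predict_isp_params, isp_pipeline):
    # Determine the first ISP parameter corresponding to the guide map.
    first_isp_params = predict_isp_params(first_guide_map)

    # Set the preset ISP parameter to the first ISP parameter and
    # re-process the same raw domain image to obtain the second image.
    second_image = isp_pipeline(first_raw_image, first_isp_params)
    return second_image
```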
In one possible design of the first aspect, before the electronic device acquires the first guide map and determines the first ISP parameter corresponding to the first guide map, the method further includes: the electronic device starts a camera application and displays a preview interface, where the preview interface includes a first control and a first preview image, and the image signal processor processes the image signal using the preset ISP parameters to generate the first preview image. In response to a user's click on the first control, the electronic device displays at least one third image, where the at least one third image includes the first guide map.
That is, after the electronic device starts the camera application, the preview interface of the camera application includes the first control, which is used to invoke the third images. A third image is an image generated using other (i.e., non-preset) ISP parameters, and the third images include the first guide map.
In another possible design of the first aspect, before the electronic device acquires the first guide map and determines the first ISP parameter corresponding to the first guide map, the method may further include: the electronic device starts a camera application and displays a preview interface including the first preview image, where the image signal processor in the electronic device processes the image signal using the preset ISP parameters to generate the first preview image. The electronic device may identify an individual object in the first preview image and, based on that individual object, display at least one fourth image, where the at least one fourth image includes the first guide map, and each fourth image includes the at least one individual object.
It will be appreciated that the electronic device may start the camera application and automatically identify the first preview image in the current preview interface. The electronic device displays fourth images containing the individual object according to the individual object in the first preview image. Since the fourth images are different from the first image, the user can view, through the electronic device, images of the individual object generated by other devices.
In another possible design of the first aspect, before the electronic device acquires the first guide map and determines the first ISP parameter corresponding to the first guide map, the method further includes: in response to receiving an operation to edit the first image, the electronic device displays an editing interface of the first image, where the editing interface of the first image includes a second control. In response to a user's click on the second control, the electronic device displays at least one third image, where the at least one third image includes the first guide map.
That is, when the electronic device is in the image editing state for the first image, the image editing interface displayed by the electronic device includes the second control. In response to the user's click on the second control, the electronic device displays a plurality of third images, which include the first guide map.
In another possible design of the first aspect, before the image signal processor processes the first raw domain image using the first ISP parameter to generate the second image, the method may further include: the electronic device displays a preview interface including a second preview image, where the image signal processor processes the image signal using the first ISP parameter to generate the second preview image. When the shooting key is triggered, the first raw domain image is acquired.
That is, the electronic device receives the user's selection of the first guide map while running the camera application and displays a preview image. Because the electronic device has set the preset ISP parameter to the first ISP parameter, the image in the preview interface of the electronic device is the second preview image, which is different from the first preview image.
In another possible design of the first aspect, before displaying the editing interface of the first image (which includes the second control) in response to receiving the operation to edit the first image, the electronic device may further acquire a second raw domain image, and generate and store the first raw domain image, where the first raw domain image is the same as the second raw domain image. The image signal processor may process the second raw domain image using the preset ISP parameters to generate the first image.
That is, the electronic device sets the preset ISP parameter to the first ISP parameter and processes the raw domain image using the first ISP parameter to generate the second image. Because the electronic device stores the first raw domain image corresponding to the first image before generating the first image, the electronic device can offer the function of modifying the shooting style of the image when editing the first image.
In another possible design of the first aspect, the first ISP parameter is obtained by processing the first guide map through a neural network.
It will be appreciated that the electronic device may also obtain the first ISP parameter corresponding to the first guide map in other manners, which are only examples herein.
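For illustration only, one plausible shape for such a neural network is a small convolutional regressor that maps a guide map to a flat vector of ISP parameters. The sketch below assumes PyTorch, and the architecture and parameter count are invented; none of these details come from this application.

```python
import torch
import torch.nn as nn

class IspParamPredictor(nn.Module):
    """Regress a flat vector of ISP parameters from a guide map."""
    def __init__(self, num_params: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to global style statistics
        )
        self.head = nn.Linear(64, num_params)  # one output per ISP parameter

    def forward(self, guide_rgb: torch.Tensor) -> torch.Tensor:
        # guide_rgb: (N, 3, H, W) guide maps with values in [0, 1]
        x = self.features(guide_rgb).flatten(1)
        return self.head(x)

# Usage sketch: params = IspParamPredictor()(guide_batch)
```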
In another possible design manner of the first aspect, the electronic device may score the at least one third image through the image scoring model, and display the at least one third image in a scoring order of the at least one third image.
In another possible design manner of the first aspect, the electronic device may further score the at least one fourth image through the image scoring model, and display the at least one fourth image according to the score of the at least one fourth image.
Because the image scoring model scores each image, the electronic device can display the plurality of images according to their scores, which provides a basis for how the plurality of images are displayed.
In another possible design of the first aspect, after the image signal processor processes the second raw domain image using the preset ISP parameters to generate the first image, the electronic device may further display a prompt message, where the prompt message is used to prompt the user that editing of the first image can be triggered within a preset duration. If no operation to edit the first image is received within the preset duration, the first raw domain image is deleted.
Because raw domain images occupy a large amount of storage space, prompting the user to use the raw domain image within the preset duration allows the electronic device to reduce its storage occupancy after the preset duration elapses, ensuring the operation of the electronic device.
In a second aspect, the present application further provides a photographing apparatus, where the photographing apparatus is included in an electronic device. The apparatus has functions for implementing the behavior of the electronic device in the first aspect and any of its possible designs, so that the electronic device performs the photographing method of the first aspect and any of its possible designs. The functions may be implemented by hardware, or by software executing the corresponding functions on hardware. The hardware or software includes at least one module or unit corresponding to the functions described above. For example, the photographing apparatus may include a display unit, a determination unit, a generation unit, and the like.
In a third aspect, the present application further provides an electronic device, including: a camera for collecting images; an image signal processor for processing raw domain images; a display screen for displaying an interface; a memory; one or more processors, where the one or more processors are coupled with the memory, the camera, the image signal processor, and the display screen; and one or more computer programs. The one or more computer programs are stored in the memory and include instructions; the instructions, when executed by the electronic device, cause the electronic device to perform the photographing method of the first aspect and any of its possible designs.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium comprising computer instructions that, when run on an electronic device, cause the electronic device to perform the photographing method of the first aspect and any of its possible designs described above.
In a fifth aspect, embodiments of the present application provide a computer program product, which when run on a computer causes the computer to perform the photographing method performed by the electronic device in the first aspect and any of its possible designs.
In a sixth aspect, embodiments of the present application provide a chip system that is applied to an electronic device. The system-on-chip includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a circuit; the interface circuit is used for receiving signals from the memory of the electronic device and sending signals to the processor, wherein the signals comprise computer instructions stored in the memory; the computer instructions, when executed by a processor, cause the electronic device to perform the photographing method of the first aspect and any of its possible designs.
It may be appreciated that the benefits achieved by the photographing apparatus of the second aspect, the electronic device of the third aspect, the computer readable storage medium of the fourth aspect, the computer program product of the fifth aspect and the chip system of the sixth aspect provided in the present application may refer to the benefits as in the first aspect and any possible design manners thereof, and are not repeated herein.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an ISP module according to an embodiment of the present application;
fig. 3 is a schematic software architecture of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic view of a photographing interface of a mobile phone in a photographing mode according to an embodiment of the present application;
fig. 5 is an interface schematic diagram of a mobile phone in a photographing mode according to an embodiment of the present application;
fig. 6 is an interface schematic diagram of an image edited by a mobile phone according to an embodiment of the present application;
fig. 7A is an interface schematic diagram of another mobile phone editing image according to an embodiment of the present application;
fig. 7B is a schematic diagram of a second shot image according to an embodiment of the present application;
fig. 7C is a schematic view of a first captured image according to an embodiment of the present application;
fig. 8 is a flowchart of a photographing method according to an embodiment of the present application;
fig. 9 is a schematic diagram of a mobile phone display interface in another photographing mode according to an embodiment of the present application;
fig. 10 is a schematic view of an application scenario of a photographing method according to an embodiment of the present application;
fig. 11 is another flowchart of a photographing method according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
In general, once the ISP parameters of an electronic device are determined, the shooting style of the images the electronic device captures is determined. That is, the shooting style of images generated by the electronic device is determined by its ISP parameters. A user generating images with the electronic device may nevertheless need to capture images in other shooting styles. To enable the electronic device to capture and generate images in other shooting styles, the electronic device may simulate the shooting style of a guide map according to an acquired guide map (i.e., a captured image generated when another device photographs a target object) and generate images identical in shooting style to the guide map.
In a first implementation, an electronic device builds an ISP-based convolutional neural network (CNN) end-to-end model. The electronic device acquires guide maps, inputs any guide map into the CNN end-to-end model, and runs the CNN model to obtain the ISP parameters of the guide map, i.e., the ISP parameters of the electronic device that generated the guide map. If the electronic device updates its ISP parameters to the guide map's ISP parameters, the shooting style of the images it generates is the same as that of the guide map.
In this implementation, the ISP parameters of the guide map are obtained while the CNN end-to-end model is offline. When the electronic device runs the camera application in photographing mode, the ISP parameters of the electronic device cannot be adjusted in real time after the guide map's ISP parameters are determined. The electronic device needs multiple iterations to change the parameters in its ISP modules to the ISP parameters of the device that generated the guide map, and this parameter iteration process makes the debugging time of the ISP parameters long.
In a second implementation, the electronic device acquires the guide map and learns its shooting style to obtain the guide map's shooting style features, such as color features, brightness features, and image gray-scale features. While the electronic device generates an image, the ISP builds a hardware-in-the-loop end-to-end model and processes the captured image algorithmically so that its shooting style features match those of the guide map, thereby simulating the shooting style of the guide map. This implementation shares a characteristic with the first one: the shooting style features of the guide map are acquired offline, and algorithmic processing makes the image captured by the electronic device match the guide map's shooting style features. That is, the ISP parameters are not adjusted in the process of generating the image. Therefore, when the electronic device simulates the shooting style of the guide map, a quality evaluation algorithm must be created to determine whether the shooting style of the image produced by the ISP is similar to that of the guide map. Such a quality evaluation algorithm is complex and difficult to design.
In a third implementation, the electronic device acquires the guide map, determines its shooting style, and determines the guide map's shooting style features using a generative adversarial network (GAN). The image captured by the electronic device is then fused with the shooting style of the guide map to simulate the shooting style of the guide map.
This approach operates on the displayed image and does not modify the parameters of the modules in the ISP. When the image captured by the electronic device is fused with the shooting style of the guide map, artifacts on the image generated by the electronic device cannot be avoided. Here, artifact broadly refers to image distortion, for example, shadows or motion appearing in the image that are not present in the target object.
The embodiments of the present application provide a photographing method that can be applied to an electronic device. The electronic device may learn the shooting style of a guide map to determine the parameters of the ISP modules of the device that generated the guide map (referred to as the first ISP parameters), and the electronic device modifies the parameters in its own ISP modules to the first ISP parameters. Thus, when the electronic device generates a captured image, the shooting style of the captured image is the same as that of the guide map, achieving the purpose of simulating the shooting style of the guide map.
It should be noted that the electronic device in the embodiments of the present application may be a mobile phone, a digital camera, a video camera, an action camera (such as a GoPro), a smart watch, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, a vehicle-mounted device, a driving recorder, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, or the like having an image capturing function. In addition, the electronic device may be a vehicle-mounted computer, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, etc.; the specific form of the electronic device is not particularly limited in the embodiments of the present application.
The implementation of the examples of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 1, the electronic device 100 may include a processor 110, an internal memory 121, a key 130, a charge management module 140, a power management module 141, a battery 142, a sensor module 180, a camera module 150, a display 160, and the like. Wherein the sensor module 180 may include: a gyroscope sensor, a direction sensor, an acceleration sensor, a distance sensor, a touch sensor, an ambient light sensor, and the like.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an ISP, a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural-network processing unit (NPU), and the like. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), and/or a general-purpose input/output (GPIO) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses, and the processor 110 may be coupled to the camera module 150 through an I2C bus.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display screen 160, the camera 150, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 150 communicate through a CSI interface to implement the photographing function of electronic device 100. The processor 110 and the display screen 160 communicate via a DSI interface to implement the display functionality of the electronic device 100.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present invention is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 implements display functions through a GPU, a display screen 160, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 160 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 160 is used to display images, videos, and the like. The display screen 160 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 160, where N is a positive integer greater than 1.
The electronic device 100 may implement shooting functions through an ISP, an NPU, a camera module 150, a video codec, a GPU, a display screen 160, an application processor, and the like.
The keys 130 may include a power on key, a volume key, a zoom key, a photographing key, etc. The key 130 may be a mechanical key or a touch key.
The camera module 150 is used to capture still images or video. The camera module 150 includes a lens 151 and an image sensor 152, and the lens 151 may be composed of a plurality of lens elements. Light reflected by the target object is transmitted through the lens 151 to the image sensor 152 (i.e., a photosensitive element), which may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The image sensor 152 may generate a raw domain image of the target object and transmit the raw domain image to the ISP. The ISP uses the raw domain image to generate an image signal and transmits the image signal to the DSP for processing; the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. The display screen 160 is used to display image signals in formats such as RGB and YUV. In some embodiments, the electronic device 100 may include 1 or N camera modules 150, where N is a positive integer greater than 1.
The ISP is used to process the data fed back by the image sensor 152. For example, when photographing, the shutter is opened, light is transmitted through the lens to the image sensor 152 in the camera module 150, and the optical signal is converted into an electrical signal to form an original image in the Bayer format; the photosensitive element transmits this Bayer-format original image to the ISP, which converts it into an image visible to the naked eye. The ISP can process the Bayer-format original image using preset parameters; therefore, once the ISP parameters in the electronic device 100 are determined, the shooting style of images generated by the electronic device 100 is also determined. In some embodiments, the ISP may be provided in the camera module 150. In other embodiments, the ISP may be located in the processor 110.
In some embodiments, the ISP may be considered a system on a chip that runs various algorithm programs for processing image signals in real time. For example, the ISP can perform algorithmic optimization on noise, brightness, and skin tones in the image, and can also optimize parameters such as the exposure and color temperature of the shooting scene. The ISP receives the raw domain image transmitted by the image sensor and can convert it into a visible image. The mobile phone is provided with an ISP database, which includes the parameters used by the ISP to run its various algorithm programs. When the ISP runs, it invokes the ISP database, so that the ISP runs its algorithms according to the parameters in the database and converts the raw domain image into an image in a preset format.
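The following is a loose sketch, with invented stage names and parameter values, of how such a parameter database could drive a software ISP; the real ISP database format is not disclosed here. Replacing the table replaces the shooting style.

```python
import numpy as np

# Illustrative parameter table: each algorithm stage reads its own
# entry, so swapping ISP_DATABASE swaps the shooting style.
ISP_DATABASE = {
    "wb":    {"gains": (2.0, 1.0, 1.6)},   # per-channel R/G/B gains (invented)
    "gamma": {"gamma": 2.2},
}

ISP_STAGES = {
    "wb":    lambda img, gains: np.clip(img * np.asarray(gains), 0.0, 1.0),
    "gamma": lambda img, gamma: img ** (1.0 / gamma),
}

def run_isp(rgb, db=ISP_DATABASE):
    """rgb: demosaiced image as a float array in [0, 1], shape (H, W, 3)."""
    for name, stage in ISP_STAGES.items():
        rgb = stage(rgb, **db[name])       # each stage looks up its parameters
    return rgb
```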
It can be understood that, to convert the raw domain image into the image in the preset format, the ISP needs multiple functions; if the ISP is divided into modules according to these functions, the ISP may include a plurality of algorithm modules.
For example, please refer to fig. 2, which is a schematic diagram of the algorithm module structure of an ISP. As shown in fig. 2, the ISP may include modules such as black level correction (BLC), Rawnf, CC, dynamic range compression (DRC), gamma correction (GAMMA), sharpening (Sharp), and a three-dimensional color lookup table (3D LUT).
The Rawnf module performs noise reduction on the raw domain image using a non-local means (NLM) algorithm. NLM processes noisy pixels in the raw domain image: the gray value of a pixel is obtained as a weighted average of the gray values of surrounding pixels, achieving image denoising. When the NLM algorithm is used to process raw domain noise, its running time trades off against the noise reduction effect: the longer the Rawnf module runs, the better the noise reduction effect on the raw domain image it processes.
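As a rough illustration of the NLM principle (not the Rawnf module's actual implementation, whose window sizes and weighting are not disclosed), a naive grayscale version might look like this:

```python
import numpy as np

def nlm_denoise(gray, patch=3, search=7, h=0.1):
    """Naive non-local means on a grayscale image in [0, 1].
    O(N * search^2 * patch^2): for illustration, not production use."""
    half_p, half_s = patch // 2, search // 2
    pad = half_p + half_s
    padded = np.pad(gray, pad, mode="reflect")
    out = np.zeros_like(gray)
    rows, cols = gray.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - half_p:ci + half_p + 1,
                         cj - half_p:cj + half_p + 1]
            weight_sum, acc = 0.0, 0.0
            # Weighted average over similar patches in the search window.
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    cand = padded[ci + di - half_p:ci + di + half_p + 1,
                                  cj + dj - half_p:cj + dj + half_p + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    weight_sum += w
                    acc += w * padded[ci + di, cj + dj]
            out[i, j] = acc / weight_sum
    return out
```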
The CC module may be used to correct color errors in the image. The color correction process is as follows: the raw domain image obtained by the ISP is compared with a preset standard image to obtain a correction matrix. The correction matrix contains the differences between the colors in the pixels of the raw domain image and the colors in the preset standard image; it is used to correct the colors in the Bayer-format image generated by the image sensor, so that the colors in the captured image generated by the mobile phone are closer to the real colors of the target object.
It should be noted that color correction in the image is accompanied by an adjustment of the color saturation. Color saturation refers to the purity of the colors in the image: the higher the purity, the more vivid the colors the image presents; the lower the purity, the duller the colors the image presents.
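A hedged sketch of applying such a correction matrix to a demosaiced RGB image; the 3x3 matrix values are invented for illustration, with rows summing to about 1 so that neutral grays are preserved while the negative off-diagonal terms increase saturation:

```python
import numpy as np

def color_correct(rgb, ccm):
    """Apply a 3x3 color correction matrix to an (H, W, 3) image in [0, 1]."""
    flat = rgb.reshape(-1, 3) @ ccm.T          # each pixel: out = CCM @ in
    return np.clip(flat, 0.0, 1.0).reshape(rgb.shape)

# Illustrative matrix, not taken from this application.
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])
```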
The dynamic range in DRC refers to the ratio of the gray level of the brightest pixels to that of the darkest pixels in the captured image generated by the mobile phone. When a strong light source (for example, direct sunlight, strong reflections, or a high-brightness lighting device) exists in the photographed scene, highlight areas and dim areas exist in the scene simultaneously; the highlight areas may be overexposed and the dim areas underexposed. Overexposed highlight areas appear white in the generated image and underexposed dark areas appear black, affecting image quality. DRC addresses the overexposure of highlight areas and the underexposure of dark areas, so that both are rendered correctly in the captured image generated by the mobile phone.
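A standard global operator that behaves as this paragraph describes is Reinhard tone mapping; the sketch below shows that well-known operator as an analogy, not the DRC module's actual algorithm, which is not disclosed here:

```python
import numpy as np

def reinhard_drc(luma, key=0.18):
    """Global dynamic range compression on a luminance channel (values > 0).
    Scale by the log-average luminance, then compress with x / (1 + x),
    so highlights roll off gently instead of clipping to white while
    dark areas are lifted toward mid-gray."""
    log_avg = float(np.exp(np.mean(np.log(luma + 1e-6))))
    scaled = key * luma / log_avg
    return scaled / (1.0 + scaled)
```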
The GAMMA correction module compensates the preview image or captured image displayed on the display when the image is gamma-encoded, so that people perceive light and brightness in the preview or captured image correctly. It will be appreciated that the gamma correction module maximizes the use of the gray-scale data bits, or bandwidth, that represent the pixels in the image data. When the display in the mobile phone shows a preview or captured image, the luminance produced by the conversion function in the display's luminance range is proportional to the luminance amplitude of the image raised to a power (called gamma). In this way, the range of luminance values in the displayed preview or captured image is expanded.
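A minimal sketch of the gamma-encoding step itself; the value 2.2 is the conventional display gamma, an assumption here rather than a value from this application:

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Map linear-light values in [0, 1] to display code values.
    Human lightness perception is roughly logarithmic, so the 1/gamma
    power curve spends more code values on dark tones, using a limited
    bit depth more evenly and expanding the usable luminance range."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)
```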
The Sharp module sharpens the image generated by the mobile phone and restores related detail in the image. After the image sensor captures the original image, the image is input into the ISP. The image output by the CMOS sensor may contain various noise, including random noise, quantization noise, fixed-pattern noise, and the like. The ISP must perform noise reduction on the original image, and during noise reduction some detail in the image is removed, leaving the image insufficiently sharp. To avoid this loss of detail during noise reduction, the image is sharpened to improve its display effect.
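Unsharp masking is the classic technique for the sharpening described here; the sketch below assumes that technique, since the application does not state which algorithm the Sharp module uses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(gray, amount=1.0, sigma=1.0):
    """Add back the detail a Gaussian blur removes:
    out = in + amount * (in - blur(in)).
    `amount` acts as the sharpening strength, restoring edges that were
    softened by the earlier noise-reduction stages."""
    blurred = gaussian_filter(gray, sigma)
    return np.clip(gray + amount * (gray - blurred), 0.0, 1.0)
```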
The 3D LUT module is a three-dimensional color lookup table: the LUT converts each set of input RGB values into output values. For example, when the LUT receives a signal, it uses the signal as a lookup address, retrieves the corresponding content from the table, and outputs it for the display to show. A 3D LUT can accurately describe all colors; it can be used to handle the nonlinear properties of colors during image display as well as wide variations of colors. Thus, the 3D LUT can be used for accurate color calibration. Color calibration problems in preview or captured image display, such as the gamma values of colors, color ranges, color range tracking errors, correction of nonlinear properties in color display, color crosstalk, and color saturation, can be handled with the 3D LUT.
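A hedged sketch of the lookup itself, using nearest-neighbor indexing: real ISPs typically interpolate trilinearly between the eight surrounding lattice points, a refinement omitted here for brevity, and the LUT size of 17 is a common convention, not a value from this application.

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Nearest-neighbor lookup in an (S, S, S, 3) color LUT for an
    (H, W, 3) image in [0, 1]: every input RGB triple is mapped to the
    output triple stored at the closest lattice point."""
    s = lut.shape[0]
    idx = np.clip(np.rint(rgb * (s - 1)).astype(int), 0, s - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# An identity LUT (output == input) as a neutral starting point.
s = 17
grid = np.linspace(0.0, 1.0, s)
identity_lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
```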
It will be appreciated that fig. 2 in the embodiments of the present application shows only some of the modules in an ISP. The above examples do not constitute a specific limitation on the modules in an ISP, more or fewer modules may be included in an ISP.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 3 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The workflow of the electronic device 100 software and hardware is illustrated below in connection with a photo scene.
Taking the electronic device being a mobile phone as an example, when the mobile phone runs the camera application, the lens in the camera module of the mobile phone is on. The mobile phone displays the interface of the camera application; when the shooting key is triggered, the touch sensor receives the touch operation, and a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the time stamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Here the touch operation is a touch on the shooting key, and the corresponding control is the shooting key control in the camera application interface. The camera application calls the interface of the application framework layer, starts the camera driver through the kernel layer, and captures a still image.
The implementation methods in the following embodiments may be implemented in an electronic device having the above-described hardware configuration.
The photographing method provided by the embodiment of the application can be applied to the following application scenes.
Scene one: the electronic device generates an image.
A user may generate an image using the electronic device. Taking the electronic device being a mobile phone as an example, a camera application (also simply called the camera) is installed in the mobile phone. For example, the mobile phone displays the main interface shown in (a) of fig. 4, which may include application icons of a plurality of applications, such as a camera application 401, an address book application 402, a telephone application 403, and an information application 404. The user clicks the icon of the camera application 401 in the main interface shown in (a) of fig. 4, and the mobile phone displays the preview interface shown in (b) of fig. 4, which includes a preview image 405. At this time, the preview image 405 is an image acquired by the camera of the mobile phone. The shooting style of the image displayed on the preview interface is the same as that of the images the mobile phone generates, so by viewing the interface the user can see the shooting style of the images the mobile phone will generate. If the user wishes to change the shooting style of the generated images, the mobile phone can run the photographing method provided by the embodiments of the present application to change that shooting style.
The mobile phone may display the shooting preview interface shown in (a) of fig. 5, which includes the preview image 405 and a simulated shooting style control 406. As the name suggests, the simulated shooting style control 406 lets the mobile phone simulate the shooting style of other images; if the user wishes to change the shooting style of the images generated by the mobile phone, the user may click the simulated shooting style control 406. On receiving the user's operation of the simulated shooting style control 406, the mobile phone displays the shooting preview interface shown in (b) of fig. 5, which includes a first guide map 407, a second guide map 408, and a third guide map 409. It can be understood that the interface shown in (b) of fig. 5 includes a plurality of guide maps (i.e., images whose shooting styles differ from those of the images generated by the mobile phone). If, while viewing this interface, the user likes the shooting style of the first guide map 407, the user can select the first guide map 407. On receiving the user's selection of the first guide map 407, the mobile phone determines to simulate the shooting style of the first guide map and determines the first ISP parameters corresponding to the first guide map 407 (i.e., the ISP parameters of the electronic device that generated the first guide map). The mobile phone can then set its ISP parameters to the first ISP parameters and display the shooting preview interface shown in (c) of fig. 5, which includes a preview image 410. Since the ISP parameters in the mobile phone are now the first ISP parameters, the shooting style of the preview image 410 is the same as that of the first guide map 407.
For example, if the user is satisfied with the image in the current cell phone photographing preview interface, the user may click the photographing key 411 in the photographing preview interface as shown in (c) of fig. 5. The mobile phone receives the operation that the photographing key 411 is triggered, and generates an image.
Also by way of example, if the user is not satisfied with the image in the current shooting preview interface, the user may click the simulated shooting style control 406 in the interface shown in (c) of fig. 5. That is, the user is not satisfied with the mobile phone's simulation of the first guide map's shooting style and wants to change the shooting style of the image in the preview interface again. In this case, on receiving the user's operation of the simulated shooting style control 406, the mobile phone displays the shooting preview interface shown in (d) of fig. 5, which includes an original image 412, the second guide map 408, and the third guide map 409. The user views this interface and reselects any guide map, so that the mobile phone simulates that guide map's shooting style.
Note that the original image 412 represents the image generated by the mobile phone before its ISP parameters were adjusted (i.e., with the mobile phone's initial ISP parameters), namely the preview image 405 shown in (b) of fig. 4.
Scene II: the electronic device edits the photographed image.
After the user generates an image with the mobile phone, the user can edit the image on the mobile phone when the need arises. Specifically, taking the electronic device being a mobile phone as an example, when the mobile phone displays the shooting preview interface shown in (b) of fig. 4 and the shooting key of the mobile phone is triggered, the mobile phone generates an image 601.
Take the example that the user wants to edit the image 601.
When the user views the image 601 captured by the mobile phone, the mobile phone displays the image display interface shown in (a) of fig. 6, which includes the image 601, a sharing control 61, a favorites control 62, an editing control 63, a deletion control 64, and a more control 65.
When the user wants to edit the image 601, the user clicks the editing control 63. On receiving the user's trigger of the editing control 63, the mobile phone displays the image editing interface shown in (b) of fig. 6. The image editing interface includes a simulated shooting style control 66, an exposure control 67, a contrast control 68, and the like. The interface shown in (b) of fig. 6 also displays the name of the simulated shooting style control 66; when any functional control is selected, the name of the corresponding functional control may be displayed.
While the user is editing the image 601 with the mobile phone, if the user wishes to change the shooting style of the image 601, the user may trigger the simulated shooting style control 66. When the mobile phone receives the user's operation on the simulated shooting style control 66, it displays the shooting style editing interface shown in fig. 6 (c), which includes the first guide map 407, the second guide map 408, and the third guide map 409. It will be understood that the guide maps shown in fig. 6 (c) are merely an example; when the mobile phone receives a sliding operation in the guide map display area, it may display other guide maps.
When the user views the guide maps through the interface shown in fig. 6 (c), the user may select any one of them. The mobile phone can then convert the shooting style of the image 601 into that of the selected guide map, thereby changing the shooting style of the image 601. Take the user selecting the first guide map 407 as an example: the mobile phone displays the image editing interface shown in fig. 6 (d), which includes an image 602 whose shooting style is the same as that of the first guide map 407.
For example, after the mobile phone generates a photographed image, the user views the image 601 through an album application. The mobile phone receives an operation of opening the image and displays the image 601. If the mobile phone receives the user's operation on the simulated shooting style control 66, the mobile phone may display guide maps from the album. That is, the camera application has the right to access the album application, so that the mobile phone can display the guide maps stored in the album.
Specifically, the mobile phone determines the first ISP parameters corresponding to the first guide map and sets its ISP parameters to the first ISP parameters. The mobile phone may then input the raw domain image corresponding to the image 601 into the ISP to obtain the image 602. It should be noted that since the image 601 was generated by the mobile phone from the raw domain image transmitted by the image sensor, changing the shooting style of the image requires the ISP to process the raw domain image again. Therefore, when the image 601 is generated, the corresponding raw domain image is stored, so that the mobile phone can edit the shooting style of the image 601 during image editing.
Illustratively, if the user is satisfied with the shooting style of the image 602, the user does not trigger the simulated shooting style control 66 again. In this case, the editing of the image's shooting style is complete.
As another example, if the user is not satisfied with the shooting style of the image 602, the user may again trigger the simulated shooting style control 66 shown in fig. 6 (d). When the mobile phone receives the user's operation on the simulated shooting style control 66 in fig. 6 (d), it can display the image editing interface shown in fig. 7A, which includes the image 601, the second guide map 408, and the third guide map 409. The user views this interface and may again select any one of the guide maps, so that the mobile phone simulates the shooting style of that guide map and generates a corresponding image.
In general, when an electronic device generates a first captured image, an image sensor acquires a raw domain image, the image sensor transmits the raw domain image to an ISP, and the ISP processes the raw domain image to generate the first captured image. That is, in the process of generating the first captured image, the electronic device does not store the raw domain image of the first captured image. When the electronic device executes the photographing method provided in the embodiment of the present application, the electronic device obtains a raw domain image, copies the raw domain image to obtain a first raw domain image and a second raw domain image, and inputs the first raw domain image into the ISP to obtain a first photographed image (such as the image 601 described above). The electronic device stores the second raw domain image such that when the electronic device receives an editing operation mimicking the guide map shooting style, the electronic device processes the second raw domain image to generate a second shot image (e.g., image 602 described above). It should be noted that, in order to ensure the smoothness of operation of the electronic device, after the electronic device generates the first shot image, if the electronic device does not receive the image editing operation of simulating the shooting style within the preset duration, the electronic device may delete the second raw domain image.
The preset duration may be 1 minute, 3 minutes, 5 minutes, or the like, and is not particularly limited. It will be appreciated that the preset duration is the time provided for the user to consider image processing. In some implementations, after the raw domain image is stored in the mobile phone, a prompt message may be displayed to inform the user that the image's shooting style can be edited within the preset duration (e.g., within 3 minutes) and cannot be edited afterwards. For example, the prompt may read: "The shooting style of this image can be adjusted within 3 minutes; after that, the adjustment is no longer available."
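Illustratively, the raw-domain caching and timed-deletion scheme described above may be sketched in Python as follows. This is a minimal sketch under stated assumptions: the class name, its methods, and the 3-minute retention window are hypothetical, not the actual implementation in the electronic device.

```python
import copy
import threading

RAW_RETENTION_SECONDS = 180  # assumed 3-minute editing window

class RawImageCache:
    """Keeps the duplicate raw domain image so the shooting style can be
    re-rendered later; deletes it if no style edit arrives in time."""

    def __init__(self):
        self._pending = {}

    def on_capture(self, image_id, raw_frame):
        first_raw = raw_frame                   # goes to the ISP for the shot image
        second_raw = copy.deepcopy(raw_frame)   # kept for later style editing
        self._pending[image_id] = second_raw
        timer = threading.Timer(RAW_RETENTION_SECONDS, self._expire, args=(image_id,))
        timer.daemon = True
        timer.start()
        return first_raw

    def _expire(self, image_id):
        # Free the memory; style editing is no longer possible for this image.
        self._pending.pop(image_id, None)

    def take(self, image_id):
        # Returns the stored raw image for re-processing, or None if expired.
        return self._pending.pop(image_id, None)
```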
When the ISP parameters in the mobile phone are the preset ISP parameters, the mobile phone generates a second shot image, which is an image of the first shooting style. The first guide map is an image of a second shooting style, and the first shot image is an image of the second shooting style. Referring to fig. 7B, which shows a second shot image (i.e., an image of the first shooting style) generated by the mobile phone: the target objects included in the second shot image are the sky, clouds, trees, and a path. As shown in fig. 7B, the brightness of the sky and clouds is high while the brightness of the trees and the path is low, so the brightness difference between the sky portions (sky and clouds) and the ground objects (trees and path) is large; as a result, the image shown in fig. 7B cannot truly reflect the brightness of the shooting scene.
High-dynamic-range (HDR) technology images with a larger exposure dynamic range (i.e., a larger brightness difference), so that the image generated by the electronic device can truly reflect the brightness of the shooting environment. HDR imaging can thus truly reflect both the brightness of the brightest regions and the brightness of the darkest regions in an image. In other words, the HDR processing capability of the mobile phone that generated the image shown in fig. 7B is poor: the figure cannot truly reflect the brightness of the brightest and darkest areas of the photographed scene.
In this case, the user may select an image with a better HDR processing effect as the first guide map, and the mobile phone may process the raw domain image corresponding to fig. 7B using the first ISP parameters corresponding to that guide map, thereby improving the display effect of fig. 7B.
For example, the mobile phone generates the second shot image and stores the corresponding raw domain image. The first guide map is an HDR image corresponding to the first ISP parameters. The ISP parameters in the mobile phone are set to the first ISP parameters, and the ISP processes the raw domain image corresponding to fig. 7B to obtain the image (the first shot image) shown in fig. 7C. As shown in fig. 7C, the brightness difference between the sky portions (sky and clouds) and the ground objects (trees and path) becomes small. Compared with the image shown in fig. 7B, the image shown in fig. 7C provides more dynamic range and image detail, and better reflects the visual effect of the real environment as seen by human eyes.
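As a rough numerical companion to the comparison between fig. 7B and fig. 7C, the dynamic range of an image can be estimated from its luminance map. The percentile choices below are illustrative assumptions, not values from this embodiment:

```python
import numpy as np

def dynamic_range_stops(luminance):
    """Approximate dynamic range, in stops, between the brightest and
    darkest regions of a luminance map (a 2-D array of float values)."""
    bright = np.percentile(luminance, 99)                 # near-brightest region
    dark = max(float(np.percentile(luminance, 1)), 1e-6)  # avoid log(0)
    return float(np.log2(bright / dark))
```

A higher value indicates that the image preserves more of the brightness span between the sky portions and the ground objects.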
The following will explain the principle of acquiring ISP parameters corresponding to a guide map in the photographing method provided in the embodiment of the present application.
Continuing with the example in which the electronic device is a mobile phone, the mobile phone may determine the first ISP parameters corresponding to a guide map (e.g., the first guide map 407). If the ISP parameters in the mobile phone are set to the first ISP parameters, the shooting style of images shot by the mobile phone is the same as that of the guide map. In some implementations, the mobile phone may determine the first ISP parameters corresponding to the first guide map via a neural network.
Illustratively, the first ISP parameters corresponding to a guide map are determined by a neural network (or ISP parameter estimation model). When any guide map is input into the ISP parameter estimation model and the model is run, it outputs the ISP parameters of the device that generated the guide map. In some implementations, an ISP parameter estimation model may be preset in the mobile phone, and the mobile phone inputs the guide map into the model to obtain the ISP parameters of the device that generated it. In other implementations, the ISP parameter estimation model resides on a remote server (or cloud device): the mobile phone obtains the guide map and transmits it to the remote server, which uses the guide map as input to the model, runs the model to obtain the ISP parameters of the device that generated the guide map, and sends those parameters back to the mobile phone.
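Illustratively, the two deployment options above (a preset on-device model, or a model hosted on a remote server) might be combined as in the following sketch; the endpoint URL, the request format, and the model call signature are all assumptions for illustration.

```python
import requests

def estimate_isp_params(guide_image_bytes, local_model=None,
                        server_url="https://example.com/isp-estimate"):
    """Return the estimated ISP parameters of the device that produced
    the guide map, using the on-device model when one is available."""
    if local_model is not None:
        # On-device path: run the preset ISP parameter estimation model.
        return local_model(guide_image_bytes)
    # Remote path: upload the guide map and receive the estimated parameters.
    resp = requests.post(server_url, files={"guide": guide_image_bytes})
    resp.raise_for_status()
    return resp.json()["isp_params"]
```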
In particular, the ISP parameter estimation model may be composed of a plurality of neural units, each of which may be a function that converts an input signal into an output signal. The model may be constructed as a neural network formed by coupling many individual neural units, so that the output of one neural unit may serve as the input of another. The model may include multiple convolutional layers, with the output signal of one layer serving as the input signal of the next. The input of each neural unit may be connected to a local receptive field of the previous convolutional layer to extract the features of that field, where a local receptive field may be a region composed of several neural units.
The ISP parameter estimation model is obtained through training and learning. Each neural unit contains a number of weight matrices, and training the model is the process of training these weight matrices.
Illustratively, the training data for the ISP parameter estimation model in the embodiments of the present application includes a guide map and the ISP parameters of the device that generated it (referred to as first ISP parameters). The guide map may be an image of a target object captured by an electronic device that has not undergone image editing operations (e.g., toning, compositing, or modifications to brightness and chroma). The ISP parameter estimation model receives the guide map and infers the ISP parameters of the device that generated it (referred to as second ISP parameters); the parameters of the model are then corrected by fitting the second ISP parameters to the first ISP parameters. Training the model means iterating and correcting its parameters continuously, so that the second ISP parameters it estimates from the guide map come ever closer to the first ISP parameters. In this way, the model learns the relationship between a guide map and the ISP parameters of the device that generated it, so that it can estimate those parameters from any guide map.
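Illustratively, one training iteration of the fitting process above could look like the sketch below, assuming a PyTorch regression network and a mean-squared-error loss; the patent does not fix a particular architecture or loss, so both are assumptions.

```python
import torch.nn.functional as F

def train_step(model, optimizer, guide_batch, first_isp_params):
    """One iteration: estimate the second ISP parameters from a batch of
    guide maps and correct the weights by fitting them to the first ones."""
    optimizer.zero_grad()
    second_isp_params = model(guide_batch)              # model's estimate
    loss = F.mse_loss(second_isp_params, first_isp_params)
    loss.backward()                                     # gradients w.r.t. weights
    optimizer.step()                                    # iterative correction
    return loss.item()
```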
It will be appreciated that if the ISP parameter estimation model uses a neural network, then the particular training process is closely related to the particular structure of the neural network, which is merely illustrative. In practical application, a reasonable training mode can be set according to the model structure of the ISP parameter estimation model.
In the above application scenario, when the mobile phone receives the user operation and determines that the user wants to adjust the shooting style of the image, the mobile phone may display multiple guide graphs. The mobile phone may display a plurality of guide graphs according to a preset display rule, and a display sequence of the plurality of guide graphs is illustrated below.
For example, each guide map may be scored by an image scoring model, and the mobile phone may display the guide maps according to their scores. The image scoring model may be an image algorithm (such as an image scoring algorithm) that processes each guide map separately and outputs a score for it. For example, the image scoring algorithm may generate a score based on features such as color contrast, texture features, and user preference, with each feature carrying a different weight. The color contrast of a guide map indicates the difference in brightness between the white of the brightest area and the black of the darkest area in the image; the greater the contrast, the richer the colors in the image, indicating that the device that generated the guide map can present richer colors. The texture features of a guide map represent the surface properties of the corresponding target object or of a partial region of the image. User preference indicates the user's liking for the guide map and is typically 1 or 0: if the guide map is marked as "favorite", its user preference is 1; otherwise it is 0.
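Illustratively, a weighted combination of the three features named above could be sketched as follows; the weight values are assumptions for illustration only.

```python
def score_guide_map(contrast, texture_strength, is_favorite,
                    weights=(0.5, 0.3, 0.2)):
    """Score one guide map from its color contrast, texture feature
    strength, and user preference, each feature carrying its own weight."""
    w_contrast, w_texture, w_pref = weights
    user_preference = 1.0 if is_favorite else 0.0  # 1 if marked "favorite"
    return (w_contrast * contrast
            + w_texture * texture_strength
            + w_pref * user_preference)
```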
It should be noted that determining the color contrast and the texture features of a guide map requires processing it with corresponding image algorithms. The image scoring algorithm may include such algorithms, so that when the mobile phone runs it, the contrast and texture features of the guide map are obtained, and the mobile phone can determine the guide map's score from these features.
After the image scoring model scores each guide map (also referred to as a guide map 1), the scores of the multiple guide maps 1 may be stored in the form of a table (e.g., called the guide map 1 scoring table). When the mobile phone displays multiple guide maps 1, it can read the scoring table and display them in order of their scores.
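Illustratively, reading the guide map 1 scoring table and arranging the guide maps for display might look like the sketch below, where the table is modeled as a simple mapping from image identifier to score (an assumption for illustration):

```python
def display_order(guide_map_ids, score_table):
    """Return guide map identifiers sorted by stored score, highest first."""
    return sorted(guide_map_ids,
                  key=lambda gid: score_table.get(gid, 0.0),
                  reverse=True)

# Usage: scores produced earlier by the image scoring model.
scores = {"guide_a": 0.82, "guide_b": 0.91, "guide_c": 0.40}
print(display_order(["guide_a", "guide_b", "guide_c"], scores))
# -> ['guide_b', 'guide_a', 'guide_c']
```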
The photographing method provided in the embodiment of the present application will be described below with reference to the accompanying drawings.
Example 1
The photographing method provided by the embodiments of the present application can be applied to electronic devices; here it is described taking a mobile phone as an example. A camera application (or simply a camera) is installed in the mobile phone, and the photographing method can be applied while the mobile phone runs the camera application to generate images. In this way, the mobile phone can generate an image with the same shooting style as a guide map.
Referring to fig. 8, a flowchart of a photographing method provided in the present application is shown. As shown in fig. 8, the method includes steps 801-806.
Step 801: the mobile phone runs the camera application and displays a shooting preview interface, where the preview interface includes a preview image of a first shooting style.
It should be noted that when the mobile phone is in photographing mode, the camera in the camera module is on: light reflected by the target object reaches the lens in the camera module and, after being refracted by the lens, reaches the image sensor, so that the image sensor generates a raw domain image of the target object. The raw domain image may be a Bayer-format image, and may be referred to as a Bayer-format raw domain image. The image sensor transmits the raw domain image to the ISP, which converts it into an RGB (color mode) image. The mobile phone can then generate a preview image from continuous multi-frame images and display it on the preview interface, so that through the display screen the user can view, in real time, the shooting range of the camera and the shooting style of images generated by the mobile phone.
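Illustratively, the Bayer-raw-to-RGB conversion performed by the ISP can be approximated with a naive half-resolution demosaic, as sketched below. A real ISP interpolates per pixel and applies many further stages (noise reduction, white balance, color correction), so this is only a toy illustration of the step; an RGGB pixel layout with even dimensions is assumed.

```python
import numpy as np

def demosaic_rggb(bayer):
    """Naive 2x2-block demosaic of an RGGB Bayer raw frame into RGB."""
    r  = bayer[0::2, 0::2].astype(np.float32)   # red sites
    g1 = bayer[0::2, 1::2].astype(np.float32)   # green sites, even rows
    g2 = bayer[1::2, 0::2].astype(np.float32)   # green sites, odd rows
    b  = bayer[1::2, 1::2].astype(np.float32)   # blue sites
    g = (g1 + g2) / 2.0                          # average the two greens
    return np.stack([r, g, b], axis=-1)          # half-resolution RGB image
```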
When the mobile phone is in photographing mode, the camera module exposes continuously, so the mobile phone continuously collects multi-frame images and the preview interface displays them in succession. That is, the preview image is a multi-frame image. Assuming the mobile phone shoots images of the first shooting style, each frame of the preview image is an image of the first shooting style, so the user can learn the mobile phone's shooting style from the preview image. Referring to fig. 4 (b), which shows a schematic diagram of the shooting preview displayed by the mobile phone, the shooting interface includes a preview image 405 in photographing mode.
Step 802: the mobile phone identifies target objects in the preview image and displays at least one guide map, where each guide map includes at least one target object of the preview image.
The at least one guide image comprises a first guide image, and the shooting style of the guide image is different from that of the preview image.
After obtaining the photographed preview image, the mobile phone can identify the preview image and determine the target object included in the preview image. For example, as shown in fig. 4 (b), the preview image 405 is included in the photographed preview interface, the mobile phone recognizes the preview image 405, and determines that the target object in the preview image 405 includes the sky, the cloud, the building, and the tree. It will be appreciated that preview image 405 is a multi-frame image, and that preview images may also be referred to as a preview image stream.
In some implementations, the mobile phone may extract any frame of image from the preview image stream as a reference frame, identify a target object in the reference frame, and the target object included in the reference frame is the target object included in the preview image.
In other implementations, the mobile phone can arbitrarily extract at least two frames of images from the preview image stream as reference frames, identify target objects included in the reference frames, and take the target objects included in the reference frames as target objects of the preview images.
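The two implementations above differ only in how many reference frames are drawn from the preview stream. A combined sketch follows, where `detector` is an assumed object-recognition callable returning a set of labels such as {"sky", "tree"}:

```python
def targets_in_preview(preview_stream, detector, num_refs=1):
    """Union of the target objects detected in num_refs reference frames
    taken from the preview image stream."""
    refs = preview_stream[:num_refs]   # any frames may serve as references
    targets = set()
    for frame in refs:
        targets |= detector(frame)     # e.g. {"sky", "cloud", "building"}
    return targets
```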
Illustratively, the preview interface in the photographing mode is shown in fig. 9 (a), the preview interface includes a preview image 405, the mobile phone recognizes that the target object in the preview image 405 includes sky, cloud, building and tree, and the mobile phone displays the interface of the photographed preview image shown in fig. 9 (b). At least one guide map (which may also be referred to as guide map 1), such as first guide map 407, second guide map 408, and third guide map 409, are included in the interface as shown in fig. 9 (b). For example, the target objects in the first guide map 407 include sky, trees, and water; the target objects in the second guide map 408 include sky, cloud, mountain, and water; the target objects in the third guide map 409 include sky, water, and trees. At least one target object in each of the three guide maps is identical to the target object in the preview image.
It will be appreciated that in the interface shown in fig. 9 (b), three guide images are superimposed over the preview image, obscuring part of the preview image. The display mode is merely an example, and in practical application, the guide chart may be displayed in the photographing interface in other display modes. For example, the mobile phone may display a preview interface in a photographing mode as shown in (b) of fig. 5. The three guide images are displayed below the preview image 405 and do not obstruct the preview image, so that the display effect of the preview image is not affected.
It should be noted that the interface shown in fig. 9 (b) and the interface shown in fig. 5 (b) each include three guide maps; in practical applications, the shooting preview interface may include more or fewer guide maps. This is merely an example and does not limit the display position or number of guide maps.
In one possible implementation, the mobile phone runs a camera application, displays a preview interface of photographing, and the preview interface comprises a simulated photographing style control. As shown in fig. 5 (a), the handset displays a preview interface of photographing including a simulated photographing style control 406, and in response to a user's operation of the simulated photographing style control 406, the handset displays a preview interface as shown in fig. 5 (b). Alternatively, as shown in fig. 5 (a), the handset displays a preview interface of photographing including a simulated photographing style control 406, and in response to a user's operation of the simulated photographing style control 406, the handset displays a preview interface as shown in fig. 9 (b).
For example, step 802 may be: "in response to a first operation of the user, the mobile phone displays at least one guide map, where the shooting style of the guide map is different from that of the preview image". The shooting preview interface displayed by the mobile phone includes a simulated shooting style control, and the first operation may be the user's triggering operation on that control.
It will be appreciated that when the mobile phone displays multiple guide maps 1, they may be displayed according to their scores. The score of a guide map 1 may be obtained using the image scoring model described above, which is not repeated here.
For example, an image scoring model may be preset in the mobile phone. For instance, the mobile phone includes an image processing module containing an image scoring algorithm; when the mobile phone runs the image processing module, it can score the guide maps 1. For example, the camera application has access to the album application and recognizes the guide maps 1 in the album, and the mobile phone runs the image processing module to score each guide map 1. The mobile phone then identifies, among the guide maps 1, the images that include at least one target object of the preview image (referred to as guide maps 2), and may display the multiple guide maps 2 according to their scores.
It should be noted that a guide map 1 in the mobile phone may be an image of another shooting style generated by the mobile phone itself, an image downloaded from the Internet, or an image received in shared data. The source of guide map 1 is not particularly limited in this embodiment.
As another example, consider an application scenario in which the mobile phone interacts with a remote server (or cloud device) while generating an image. As shown in fig. 10, a mobile phone 901 interacts with a remote server 902 for the purpose of simulating a shooting style. An image scoring algorithm is preset in the remote server 902, which scores multiple guide maps 1 and attaches each score to the corresponding image as a label. When the mobile phone runs the camera application and displays the shooting preview, the mobile phone 901 transmits the preview image to the remote server 902.
For example, remote server 902 may send multiple guide charts 1 to mobile phone 901 based on the preview image, mobile phone 901 may receive multiple guide charts 1, identify the labels of guide charts 1, determine the scoring of guide charts 1, and may display guide charts 1 according to the scoring of multiple guide charts 1.
For another example, the remote server 902 may identify the preview image to determine the target objects it includes, and then determine, from the guide maps 1, the guide maps 2 that contain those target objects. The remote server 902 may send the guide maps 2 to the mobile phone 901, which receives them, identifies their labels, determines their scores, and displays them accordingly.
It should be noted that if the mobile phone 901 sends multiple guide maps to the remote server 902, the remote server 902 may score each guide map 1, take the score as the guide map's label, and send the scored guide maps 1 back to the mobile phone 901.
It should be noted that if the image scoring model is not set in the mobile phone and the mobile phone does not interact with the remote server, the mobile phone may display the guidance chart 1 according to any display sequence.
Step 803: in response to a selection operation of the first guide map, the handset determines a first ISP parameter for a device generating the first guide map.
It can be appreciated that because the ISP parameters differ among electronic devices made by different manufacturers, the shooting styles of images generated with those devices' shooting functions also differ. When the user wants to generate images of another shooting style with the mobile phone's shooting function, the mobile phone determines the image whose shooting style the user desires (i.e., the first guide map) and can simulate that shooting style to generate an image with the same style, meeting the user's requirement.
Wherein the first ISP parameters of the device capturing the first guide map may be determined by an ISP parameter estimation model. The manner in which the mobile phone determines the first ISP parameters of the device that generates the first guide map is described in detail above, and will not be described herein.
Step 804: the mobile phone sets ISP parameters as first ISP parameters.
Assume the mobile phone has preset ISP parameters and generates images of shooting style 1; for example, the preview image 405 is an image of shooting style 1. The first guide map is an image of shooting style 2, which differs from shooting style 1. After the mobile phone sets its ISP parameters to the first ISP parameters, it may generate images of shooting style 2, achieving the purpose of simulating the shooting style.
The memory of the mobile phone stores a preset ISP parameter file; the ISP can access the address of this file, load it, and parse it to obtain the preset ISP parameters. When the ISP processes an image with a preset algorithm, the preset ISP parameters serve as the parameters of that algorithm, enabling the ISP to perform its image processing function.
A preset algorithm may be understood as a functional relationship between input and output; it contains parameters whose values affect the processing effect. For example, the ISP includes a color correction algorithm, and the processing parameters of the color correction algorithm are included in the ISP parameters. These processing parameters are obtained by parsing the preset ISP parameter file; when the ISP runs the color correction algorithm, it calls the relevant parameters to correct the colors of the image.
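Illustratively, a color correction step driven by the parsed ISP parameters might be sketched as below, where the 3x3 color correction matrix is assumed to be one of the values obtained from the ISP parameter file:

```python
import numpy as np

def color_correct(rgb, ccm):
    """Apply a 3x3 color correction matrix (from the ISP parameters)
    to an H x W x 3 RGB image."""
    h, w, _ = rgb.shape
    flat = rgb.reshape(-1, 3).astype(np.float32)
    corrected = flat @ np.asarray(ccm, dtype=np.float32).T
    return np.clip(corrected, 0.0, 255.0).reshape(h, w, 3)
```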
In some implementations, the mobile phone determines the first ISP parameters corresponding to the first guide map, and the mobile phone system has the right to access and modify the preset ISP parameter file. The mobile phone generates a first ISP parameter file from the first ISP parameters and can replace the preset ISP parameter file with it. Specifically, the mobile phone may assemble the first ISP parameters into a first data packet and convert that packet into a binary file; this binary file is the first ISP parameter file.
It will be appreciated that when the preset ISP parameter file is replaced with the first ISP parameter file, the ISP operation will call the first ISP parameter file to process the raw domain image. In this case, the handset may generate an image of shooting style 2.
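Illustratively, generating the first ISP parameter file and swapping it in for the preset file might be sketched as follows; the file path, the length-prefixed packet layout, and the JSON serialization are hypothetical choices, since the embodiment only specifies that the parameters are packed into a binary file.

```python
import json
import os
import struct

def install_isp_params(first_isp_params, path="/vendor/etc/isp_params.bin"):
    """Serialize the first ISP parameters into a binary file and replace
    the preset ISP parameter file with it."""
    payload = json.dumps(first_isp_params).encode("utf-8")  # first data packet
    packet = struct.pack("<I", len(payload)) + payload      # binary file packet
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(packet)
    # Atomic swap, so the ISP never reads a half-written parameter file.
    os.replace(tmp, path)
```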
Step 805: the mobile phone displays a preview interface of photographing, wherein the preview interface comprises a preview image of a second photographing style.
It can be appreciated that while the mobile phone is replacing the preset ISP parameters with the first ISP parameters, the ISP pauses processing raw domain images, which may affect the display of the preview image in photographing mode. In this case, the mobile phone may pause displaying the preview image (for example, display the first guide map instead), or display a prompt indicating that the preview image is loading.
After the mobile phone sets its ISP parameters to the first ISP parameters, it displays the shooting preview interface. Since the ISP parameters called by the ISP have changed, the preview image displayed by the mobile phone is an image of the second shooting style, and the user can learn the photographing effect of the mobile phone by viewing the preview interface.
When the mobile phone displays the preview interface, the preview interface may further include a control for switching the shooting style. As shown in fig. 5 (c), the preview interface of the photographing mode includes an emulated photographing style control 406. If the handset receives a trigger operation of the simulated shooting style control 406 by the user, the handset may perform steps 802-805 described above.
Step 806: the mobile phone shooting key is triggered, and the mobile phone generates a first shooting image, wherein the shooting style of the first shooting image is the same as that of the first guide graph.
When the mobile phone displays the preview image of the second shooting style and its shooting key is triggered, the mobile phone generates the first shot image. Since the first guide map is an image of the second shooting style, the first shot image is also an image of the second shooting style.
It will be appreciated that when the mobile phone uses the first ISP parameters, if the photographing mode is switched to the video recording mode (also called video mode), the mobile phone may generate a video file. The video file consists of multiple frames, each of which is an image of the second shooting style.
In some implementations, when the mobile phone closes the camera application, the ISP parameters may be restored to the preset ISP parameters. That is, each time the camera application is started and the mobile phone displays the shooting preview interface, the preview image is an image of the first shooting style.
Example 2
In the process of editing the image by the mobile phone, the photographing method provided by the embodiment of the application is adopted to obtain the image with the same photographing style as the guide image.
Please refer to fig. 11, which is a flowchart of a photographing method provided in the present application. As shown in fig. 11, the method includes steps 1101-1106.
Step 1101: a shooting key of the mobile phone is triggered, the mobile phone generates a second shot image, and a raw domain image corresponding to the second shot image is stored.
The second shot image is an image of the first shooting style; that is, the mobile phone generates the second shot image when the ISP parameters are the preset ISP parameters. It can be appreciated that when the mobile phone generates the first shot image, the raw domain image corresponding to the first shot image may also be stored.
Since the shooting style of an image is adjusted by changing the ISP parameters, the mobile phone can generate images of other shooting styles. The mobile phone therefore stores the raw domain image of the target object, so that the ISP can process it again to obtain images of other shooting styles (such as the first shooting style or a third shooting style).
The mobile phone is in a photographing mode, a photographing key of the mobile phone is triggered, and an image sensor acquires a raw domain image. The mobile phone copies the raw domain image to generate a first raw domain image and a second raw domain image, wherein the first raw domain image and the second raw domain image are identical. The first raw domain image is transmitted to the ISP, and the raw domain image is processed by the ISP to generate a second captured image. Meanwhile, the mobile phone stores the second raw domain image.
In some implementations, when the mobile phone generates the second shot image of the target object, the raw domain image acquired by the mobile phone image sensor is stored by default, so that a function of simulating the shooting style can be provided when the image is edited later.
In other implementations, because a raw domain image occupies a large amount of storage space, the mobile phone may display a prompt before the shooting key is triggered, asking whether the shooting style of the subsequently generated image will need editing. If the mobile phone receives an operation indicating that the style will be edited, it stores the raw domain image of the shot image (i.e., the second shot image). If it receives an operation indicating that the style will not be edited, it does not store the raw domain image corresponding to the second shot image, saving storage space.
Step 1102: in response to the operation of editing the image, the mobile phone displays an image editing interface including the second captured image.
The image editing interface includes the second shot image (i.e., the image 601), as shown in fig. 6 (a). Fig. 6 (a) has already been described in Scene II above and is not repeated here.
Illustratively, the mobile phone receives the user's click on the editing control 63 and determines that the user wants to edit the second shot image. In response to the touch operation on the editing control 63, the mobile phone displays the interface shown in fig. 6 (b) and can edit the second shot image according to the user's operations. The interface shown in fig. 6 (b) includes the simulated shooting style control 66, the exposure control 67, the contrast control 68, and the like; the positions of these controls can be slid left and right in response to a sliding operation, and when a control is located at the middle position its name is displayed. As shown in fig. 6 (b), the simulated shooting style control 66 is in the middle position, so the name "simulated shooting style" is displayed.
Step 1103: the mobile phone receives the operation of adjusting the shooting style, and displays at least one guide chart.
Illustratively, when the mobile phone is displaying the interface shown in fig. 6 (b) and receives the user's click on the simulated shooting style control 66, it determines that the user needs to adjust the shooting style of the second shot image. In response to the touch operation on the simulated shooting style control, the mobile phone displays the interface shown in fig. 6 (c), which includes the first guide map 407, the second guide map 408, and the third guide map 409.
The manner in which the mobile phone displays multiple guide maps is the same as in step 802 of Embodiment 1 and is not repeated here; it will be understood that the related technical details of this embodiment may likewise be applied to Embodiment 1.
Step 1104: in response to the user's selection of the first guide map, the mobile phone determines the first ISP parameters corresponding to the first guide map.
It will be appreciated that the first ISP parameters in the device generating the first guide map are determined by the ISP parameter estimation model described above. Therefore, the specific manner of determining the first ISP parameter with respect to the ISP parameter estimation model may refer to the above-mentioned related implementation steps, which are not described herein.
Step 1105: the mobile phone sets ISP parameters as first ISP parameters.
It should be noted that steps 1104 and 1105 are the same as steps 803 and 804; the implementation details of those steps apply to steps 1104 and 1105 and are not repeated here.
Step 1106: the mobile phone processes the raw domain image by using the ISP to obtain a first shooting image, wherein the shooting style of the first shooting image is the same as that of the first guide image.
The mobile phone copies the original raw domain image to obtain a first raw domain image and a second raw domain image, and stores the second raw domain image. When the mobile phone ISP parameter is set as the first ISP parameter, the ISP receives the second raw domain image, and the ISP processes the second raw domain image to generate a first shooting image. Because the ISP parameter in the mobile phone is the first ISP parameter, the shooting style of the first shooting image is the same as that of the first guide chart.
It should be noted that after the mobile phone generates the first shot image from the second raw domain image, the memory space storing the second raw domain image may be released. In that case, the mobile phone can no longer produce shot images of other shooting styles. Although a guide map includes at least one target object of the shot image, the content of a displayed guide map is generally not exactly the same as the content of the shot image. If the user then selects another guide map (e.g., the second guide map) and wants the mobile phone to generate the corresponding shot image, this is difficult: because the previous shot image was generated from the second raw domain image, the mobile phone can hardly generate an image in the shooting style of the second guide map.
In some implementations, the mobile phone may copy the second raw domain image to obtain a third raw domain image, and use the third raw domain image as the input of the ISP to generate the second shot image. When the user views the second shot image, the mobile phone can display first prompt information asking whether the user wants to store the second shot image; if the mobile phone receives an instruction to store it, the mobile phone stores the second shot image. Further, the mobile phone can display second prompt information asking whether the user wants to generate images of other shooting styles; if the mobile phone receives an instruction to do so, it displays at least one guide map again, excluding the first guide map (whose shooting style is the same as that of the second shot image). In addition, if the mobile phone receives an instruction not to store the second shot image, it can also display the second prompt information.
It can be appreciated that the mobile phone can repeatedly display the first and second prompt information, so that the user can view shot images in the same styles as the guide maps. After the mobile phone has obtained the images of other shooting styles, it can delete the stored second raw domain image, releasing its memory and reducing the memory space occupied during operation.
The above description takes a mobile phone as an example of the electronic device; when the electronic device is another type of device, images may be generated in the same way, which is not described again here.
It will be appreciated that, in order to achieve the above-mentioned functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application may divide the functional modules of the electronic device according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
The embodiment of the application also provides electronic equipment, which comprises: the camera module is used for collecting images; the display screen is used for displaying the preview image and the generated shooting image; one or more processors and one or more memories. The one or more memories are coupled to the one or more processors, the one or more memories being configured to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the related method steps described above to implement the photographing method of the above embodiments.
Embodiments of the present application also provide a chip system including at least one processor and at least one interface circuit. The processors and interface circuits may be interconnected by wires. For example, the interface circuit may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit may be used to send signals to other devices (e.g., processors). The interface circuit may, for example, read instructions stored in the memory and send the instructions to the processor. The instructions, when executed by a processor, may cause an electronic device to perform the various steps of the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
The embodiment of the application also provides a computer storage medium, which comprises computer instructions, when the computer instructions run on the electronic device, the electronic device is caused to execute the functions or steps executed by the mobile phone in the embodiment of the method.
The present application also provides a computer program product, which when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the above-mentioned method embodiments.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units described above may be implemented either in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A photographing method, applied to an electronic device, wherein the electronic device comprises preset image signal processor (ISP) parameters, and when the image signal processor processes a first raw domain image using the preset ISP parameters, a first image is generated;
the photographing method comprises the following steps:
acquiring a first guide graph, and determining a first ISP parameter corresponding to the first guide graph, wherein the preset ISP parameter is different from the first ISP parameter; the first ISP parameter is obtained by processing the first guide map through a neural network;
setting an ISP parameter to the first ISP parameter;
the image signal processor processes the first raw domain image by using the first ISP parameter to generate a second image.
2. The method of claim 1, wherein before the acquiring a first guide graph and determining a first ISP parameter corresponding to the first guide graph, the preset ISP parameter being different from the first ISP parameter, the method further comprises:
Starting a camera application, and displaying a preview interface, wherein the preview interface comprises a first control and a first preview image, and the image signal processor processes an image signal by adopting the preset ISP parameters to generate the first preview image;
and responding to clicking operation of the first control by a user, and displaying at least one third image, wherein the at least one third image comprises the first guide graph.
3. The method of claim 1, wherein before the acquiring a first guide graph and determining a first ISP parameter corresponding to the first guide graph, the preset ISP parameter being different from the first ISP parameter, the method further comprises:
starting a camera application, and displaying a preview interface, wherein the preview interface comprises a first preview image, and the image signal processor processes an image signal by adopting the preset ISP parameters to generate the first preview image;
identifying individual objects in the first preview image;
displaying at least one fourth image according to the individual object in the first preview image, wherein the at least one fourth image comprises the first guide image; wherein each of the fourth images includes at least one of the individual objects.
4. The method of claim 1, wherein before the acquiring a first guide graph and determining a first ISP parameter corresponding to the first guide graph, the preset ISP parameter being different from the first ISP parameter, the method further comprises:
responsive to receiving an operation to edit the first image, displaying an editing interface of the first image, the editing interface of the first image including a second control;
and responding to clicking operation of the second control by a user, and displaying at least one third image, wherein the at least one third image comprises the first guide graph.
5. The method according to claim 2 or 3, wherein before the image signal processor processes the first raw domain image using the first ISP parameter to generate the second image, the method further comprises:
the preview interface comprises a second preview image, and the image signal processor processes an image signal by adopting the first ISP parameters to generate the second preview image;
and triggering a shooting key to acquire the first raw domain image.
6. The method of claim 4, wherein before the displaying an editing interface of the first image in response to receiving the operation to edit the first image, the editing interface of the first image including the second control, the method further comprises:
Acquiring a second raw domain image, generating a first raw domain image, and storing the first raw domain image, wherein the first raw domain image is the same as the second raw domain image;
and the image processor processes the second raw domain image by adopting the preset ISP parameters to generate the first image.
7. The method according to claim 2 or 4, characterized in that the method further comprises:
and scoring the at least one third image through an image scoring model, and displaying the at least one third image according to the scoring of the at least one third image.
8. The method according to claim 3, wherein before the displaying at least one fourth image according to the individual objects in the preview image, the at least one fourth image including the first guide map, the method further comprises:
and scoring the at least one fourth image through an image scoring model, and displaying the at least one fourth image according to the scoring of the at least one fourth image.
9. The method of claim 6, wherein after the image processor processes the second raw domain image using the preset ISP parameters to generate the first image, the method further comprises:
Displaying prompt information, wherein the prompt information is used for prompting a user to trigger editing the first image within a preset time period;
and if the operation of editing the first image is not received within the preset duration, deleting the first raw domain image.
10. An electronic device, comprising:
the camera is used for collecting images; an image signal processor for processing the raw domain image; the display screen is used for displaying an interface;
a memory and one or more processors, wherein the one or more processors are coupled with the memory, the camera, the image signal processor, and the display screen;
and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions; the instructions, when executed by the electronic device, cause the electronic device to perform the photographing method of any of claims 1-9.
11. A computer readable storage medium comprising computer instructions which, when run on a computer, cause the computer to perform the photographing method of any of claims 1-9.
CN202011482589.5A 2020-12-15 2020-12-15 Photographing method and related equipment Active CN114640783B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011482589.5A CN114640783B (en) 2020-12-15 2020-12-15 Photographing method and related equipment
PCT/CN2021/135118 WO2022127611A1 (en) 2020-12-15 2021-12-02 Photographing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011482589.5A CN114640783B (en) 2020-12-15 2020-12-15 Photographing method and related equipment

Publications (2)

Publication Number Publication Date
CN114640783A CN114640783A (en) 2022-06-17
CN114640783B true CN114640783B (en) 2024-03-05

Family

ID=81944360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011482589.5A Active CN114640783B (en) 2020-12-15 2020-12-15 Photographing method and related equipment

Country Status (2)

Country Link
CN (1) CN114640783B (en)
WO (1) WO2022127611A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116048379B (en) * 2022-06-30 2023-10-24 荣耀终端有限公司 Data recharging method and device
CN115442517B (en) * 2022-07-26 2023-07-25 荣耀终端有限公司 Image processing method, electronic device, and computer-readable storage medium
CN115589539B (en) * 2022-11-29 2023-05-09 荣耀终端有限公司 Image adjustment method, device and storage medium
CN117560552A (en) * 2024-01-10 2024-02-13 荣耀终端有限公司 Shooting control method, electronic device and readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4325415B2 (en) * 2004-01-27 2009-09-02 株式会社ニコン An electronic camera having a finish setting function and a processing program for customizing the finish setting function of the electronic camera.
US20080091966A1 (en) * 2006-10-13 2008-04-17 Alpha Imaging Technology, R.O.C. Structure and method for booting an image signal processor
US8214766B1 (en) * 2008-07-09 2012-07-03 Adobe Systems Incorporated Method and system for preview control for image adjustment
US9019400B2 (en) * 2011-05-31 2015-04-28 Olympus Imaging Corp. Imaging apparatus, imaging method and computer-readable storage medium
KR20160146281A (en) * 2015-06-12 2016-12-21 삼성전자주식회사 Electronic apparatus and method for displaying image
US10755425B2 (en) * 2018-02-05 2020-08-25 Intel Corporation Automatic tuning of image signal processors using reference images in image processing environments
CN109788204A (en) * 2019-03-12 2019-05-21 维沃移动通信有限公司 Shoot processing method and terminal device
CN110399924B (en) * 2019-07-26 2021-09-07 北京小米移动软件有限公司 Image processing method, device and medium
CN110533611A (en) * 2019-08-26 2019-12-03 维沃移动通信有限公司 Image processing method and terminal device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105516507A (en) * 2015-12-25 2016-04-20 联想(北京)有限公司 Information processing method and electronic equipment
CN106454097A (en) * 2016-10-29 2017-02-22 深圳市金立通信设备有限公司 Photographing method and photographing device
CN111885295A (en) * 2018-03-26 2020-11-03 华为技术有限公司 Shooting method, device and equipment

Also Published As

Publication number Publication date
WO2022127611A1 (en) 2022-06-23
CN114640783A (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN114640783B (en) Photographing method and related equipment
US20230217098A1 (en) Shooting method, graphical interface, and related apparatus
CN113592887A (en) Video shooting method, electronic device and computer-readable storage medium
CN111508039A (en) Word processing method of ink screen and communication terminal
CN112262563A (en) Image processing method and electronic device
CN113963659A (en) Adjusting method of display equipment and display equipment
CN113110731B (en) Method and device for generating media content
CN117201930B (en) Photographing method and electronic equipment
CN116055699B (en) Image processing method and related electronic equipment
CN113395441A (en) Image color retention method and device
CN112799557B (en) Ink screen display control method, terminal and computer readable storage medium
CN115242983A (en) Photographing method, electronic device, computer program product, and readable storage medium
CN113891008A (en) Exposure intensity adjusting method and related equipment
CN113709355A (en) Sliding zoom shooting method and electronic equipment
CN117274106B (en) Photo restoration method, electronic equipment and related medium
CN116363017B (en) Image processing method and device
CN117170560B (en) Image transformation method, electronic equipment and storage medium
WO2022267781A1 (en) Modeling method and related electronic device, and storage medium
CN114245011B (en) Image processing method, user interface and electronic equipment
CN115225822B (en) Data processing method and electronic equipment
CN116205806B (en) Image enhancement method and electronic equipment
EP4277257A1 (en) Filming method and electronic device
CN114363507A (en) Image processing method and device
CN117119316A (en) Image processing method, electronic device, and readable storage medium
CN117956264A (en) Shooting method, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant