CN114640783A - Photographing method and related equipment - Google Patents

Photographing method and related equipment

Info

Publication number
CN114640783A
Authority
CN
China
Prior art keywords
image
isp
mobile phone
preview
guide
Prior art date
Legal status
Granted
Application number
CN202011482589.5A
Other languages
Chinese (zh)
Other versions
CN114640783B (en)
Inventor
武小宇
吕飞帆
刘子鸾
张运超
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011482589.5A
Priority to PCT/CN2021/135118 (WO2022127611A1)
Publication of CN114640783A
Application granted
Publication of CN114640783B
Status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/80: Camera processing pipelines; Components thereof

Abstract

The embodiments of this application provide a photographing method and related device, relating to the technical field of image processing. An electronic device can imitate the shooting style of images captured by other devices and generate images in those other shooting styles. The specific scheme is as follows: the photographing method is applied to an electronic device that includes an image signal processor and a preset ISP parameter; when the image signal processor processes a first raw domain image using the preset ISP parameter, a first image is generated. The electronic device acquires a first guide map and determines a first ISP parameter corresponding to the first guide map, the preset ISP parameter being different from the first ISP parameter; the preset ISP parameter is then set to the first ISP parameter. When the image signal processor processes the first raw domain image using the first ISP parameter, a second image is generated.

Description

Photographing method and related equipment
Technical Field
The embodiments of this application relate to the technical field of image processing, and in particular to a photographing method and related device.
Background
Existing electronic devices commonly have a photographing function. During photographing, light reflected by a target object is projected through a lens onto an image sensor, so that the image sensor generates an original raw domain image (also called a raw image). The image sensor transmits the raw domain image to an Image Signal Processor (ISP), which can convert the raw domain image into an image in a format such as JPG. The ISP may include a plurality of algorithm modules, each implementing a different function, so that together they convert the raw domain image into a finished image. For example, the ISP includes a Rawnf module for removing noise, and a Color Correction (CC) module for correcting color errors.
If the parameters of the modules in the ISP differ, the modules' processing effects on the image also differ, so the images generated by the ISP differ slightly. For example, when two electronic devices (say, electronic device 1 and electronic device 2) photograph the same target object but the CC modules of their ISPs use different parameters, the image 1 generated by electronic device 1 and the image 2 generated by electronic device 2 differ slightly: the colors (for example, blue) displayed in image 1 and image 2 are slightly different. In this case, the two images are said to have different shooting styles. The parameters in the ISP modules of electronic devices from different manufacturers differ, so the shooting styles of images captured by devices from different manufacturers differ; even devices of different models designed by the same manufacturer can produce images with different shooting styles.
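The effect of module parameters on shooting style can be illustrated with a short sketch. The following Python snippet is a minimal illustration rather than anything from the patent: it applies two different, made-up color correction matrices to the same linear RGB data, standing in for two vendors' CC module tunings.

```python
import numpy as np

rgb_linear = np.random.rand(4, 4, 3)            # stand-in for demosaiced raw data

# Hypothetical CCM tunings for two devices; values are illustrative only.
ccm_device_1 = np.array([[1.50, -0.30, -0.20],
                         [-0.25, 1.45, -0.20],
                         [-0.10, -0.35, 1.45]])
ccm_device_2 = np.array([[1.30, -0.20, -0.10],
                         [-0.15, 1.25, -0.10],
                         [-0.05, -0.25, 1.30]])

def apply_ccm(rgb, ccm):
    """Apply a 3x3 color correction matrix to every pixel and clip to [0, 1]."""
    return np.clip(rgb @ ccm.T, 0.0, 1.0)

image_1 = apply_ccm(rgb_linear, ccm_device_1)   # "shooting style" of device 1
image_2 = apply_ccm(rgb_linear, ccm_device_2)   # "shooting style" of device 2
# Identical input, different parameters: slightly different colors.
```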
It is understood that, for any electronic device in the photographing mode, the shooting style of the captured images it generates is fixed. That is, when generating a captured image, the electronic device processes the image with its own ISP, producing an image in the corresponding shooting style. In this case, it is difficult for the electronic device to generate captured images in another shooting style.
Disclosure of Invention
The application provides a photographing method and related device, so that an electronic device can imitate the shooting style of images captured by other devices and generate images in those other shooting styles.
In order to achieve this technical purpose, the present application adopts the following technical solutions:
In a first aspect, the present application provides a photographing method, which may be applied to an electronic device. The electronic device includes a preset ISP parameter, and its image signal processor processes a first raw domain image using the preset ISP parameter to generate a first image. The photographing method may include: the electronic device acquires a first guide map and determines a first ISP parameter corresponding to the first guide map, the preset ISP parameter being different from the first ISP parameter. If the ISP parameter in the electronic device is set to the first ISP parameter, the image signal processor may process the first raw domain image using the first ISP parameter to generate a second image.
It is understood that, in the process of generating the first image, the image signal processor calls the preset ISP parameter to process the first raw domain image. When the preset ISP parameter in the electronic device is replaced with the first ISP parameter, the image signal processor calls the first ISP parameter to process the first raw domain image and generates the second image. That is, if the ISP parameters called by the image signal processor differ, the generated images differ.
Generally, ISP parameters are preset per device, so the images generated by different electronic devices differ. If a first device presets the first ISP parameter, the first device can generate the first guide map; if the electronic device presets its own ISP parameter, the electronic device generates the first image. Visually, a user can readily see that the shooting style of the first image and the shooting style of the first guide map differ. Therefore, when the electronic device determines the first ISP parameter corresponding to the first guide map and processes the raw domain image using the first ISP parameter, it generates the second image. The electronic device thereby achieves the goal of imitating the shooting style of the first device and generates images in another shooting style, namely the shooting style of the first guide map.
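A compact sketch of this first-aspect flow is given below, with hypothetical class and function names (the patent does not define an API); a single gain value stands in for a full ISP parameter set.

```python
class SimpleISP:
    """Toy stand-in for an image signal processor with one tunable parameter."""
    def __init__(self, preset_params):
        self.params = preset_params

    def set_params(self, params):
        self.params = params

    def process(self, raw_image):
        # Real ISPs chain many modules (denoise, CCM, gamma, ...);
        # a gain multiply stands in for the whole pipeline here.
        return [min(1.0, p * self.params["gain"]) for p in raw_image]

def imitate_shooting_style(isp, guide_map, raw_image, predict_isp_params):
    first_isp_params = predict_isp_params(guide_map)  # e.g. neural-network inference
    isp.set_params(first_isp_params)                  # preset -> first ISP parameters
    return isp.process(raw_image)                     # second image, guide-map style

isp = SimpleISP(preset_params={"gain": 1.0})
second_image = imitate_shooting_style(
    isp, guide_map=[0.7], raw_image=[0.2, 0.4],
    predict_isp_params=lambda g: {"gain": 1.3})       # hypothetical predictor
```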
In a possible design of the first aspect, before the electronic device obtains the first guide map and determines the first ISP parameter corresponding to the first guide map, the method further includes: the electronic device starts a camera application and displays a preview interface, the preview interface including a first control and a first preview image, where the image signal processor processes the image signal using the preset ISP parameter to generate the first preview image. In response to the user's click operation on the first control, the electronic device displays at least one third image, the at least one third image including the first guide map.
That is to say, after the electronic device starts the camera application, the preview interface of the camera application includes a first control for calling up the third images. A third image is an image generated using other (i.e., non-preset) ISP parameters, and the third images include the first guide map.
In another possible design of the first aspect, before the electronic device obtains the first guide map and determines the first ISP parameter corresponding to the first guide map, the method further includes: the electronic device starts a camera application and displays a preview interface including a first preview image, where the image signal processor in the electronic device processes the image signal using the preset ISP parameter to generate the first preview image. The electronic device can identify an individual object in the first preview image and, based on that object, display at least one fourth image, the at least one fourth image including the first guide map. Each fourth image contains at least one such individual object.
As can be appreciated, once the camera application is started, the electronic device can automatically identify the first preview image in the current preview interface and display fourth images containing the identified individual object. Since a fourth image differs from the first image, the user can view, through the electronic device, images of the individual object generated by other devices.
In another possible design of the first aspect, before the electronic device obtains the first guide map and determines the first ISP parameter corresponding to the first guide map, the method further includes: in response to receiving an operation to edit the first image, the electronic device displays an editing interface for the first image, the editing interface including a second control. In response to the user's click operation on the second control, the electronic device displays at least one third image, the at least one third image including the first guide map.
That is, if the electronic device is in the editing state for the first image, the image editing interface displayed by the electronic device includes the second control. In response to the user's click operation on the second control, the electronic device displays a plurality of third images, which include the first guide map.
In another possible design of the first aspect, before the image signal processor processes the first raw domain image using the first ISP parameter to generate the second image, the method further includes: the electronic device displays a preview interface including a second preview image, where the image signal processor processes the image signal using the first ISP parameter to generate the second preview image. When the shooting key is triggered, the first raw domain image is acquired.
The electronic device receives the user's selection of the first guide map and displays the preview image. Because the electronic device has set the preset ISP parameter to the first ISP parameter, the image in the preview interface is the second preview image, which differs from the first preview image.
In another possible design of the first aspect, before the operation of editing the first image is received and the editing interface of the first image (including the second control) is displayed, the electronic device may further acquire a second raw domain image, and generate and store a first raw domain image identical to the second raw domain image. The image signal processor may process the second raw domain image using the preset ISP parameter to generate the first image.
The electronic device sets the preset ISP parameter to the first ISP parameter and processes the raw domain image using the first ISP parameter to generate the second image. Because the electronic device stores the first raw domain image corresponding to the first image before generating the first image, the electronic device can offer the function of modifying the image's shooting style when the first image is edited.
In another possible design of the first aspect, the first ISP parameter is obtained by processing the first guide map through a neural network.
It is understood that the electronic device may also obtain the first ISP parameter corresponding to the first guide map in other manners; this is only an example.
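For illustration, a sketch of the neural-network route is given below. The patent does not specify the architecture, so a single untrained linear layer stands in for the model, and the parameter names (nlm_strength, ccm_scale, gamma, sharpen) are assumptions, not terms from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))                      # stand-in "trained" weights

def guide_map_to_isp_params(guide_map):
    features = guide_map.mean(axis=(0, 1))       # crude 3-channel color feature
    out = W @ features                           # "network" forward pass
    # Map network outputs onto named ISP parameters (names are assumptions).
    return {"nlm_strength": float(abs(out[0])),
            "ccm_scale":    float(1.0 + 0.1 * out[1]),
            "gamma":        float(2.2 + 0.1 * out[2]),
            "sharpen":      float(abs(out[3]))}

first_isp_params = guide_map_to_isp_params(rng.random((8, 8, 3)))
```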
In another possible design of the first aspect, the electronic device may score the at least one third image through an image scoring model and display the at least one third image in order of their scores.
In another possible design of the first aspect, the electronic device may likewise score the at least one fourth image through the image scoring model and display the at least one fourth image according to its scores.
Because the image scoring model scores each image, the electronic device can display the plurality of images according to their scores, which provides a basis for ordering the display.
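A minimal sketch of such ranking follows; the scoring function (mean brightness plus contrast) is a placeholder assumption, since the patent does not specify the image scoring model.

```python
import numpy as np

def score_image(img):
    return float(img.mean() + img.std())            # toy aesthetic score

def ordered_for_display(candidates):
    return sorted(candidates, key=score_image, reverse=True)

rng = np.random.default_rng(1)
third_images = [rng.random((4, 4, 3)) for _ in range(3)]
display_order = ordered_for_display(third_images)   # highest-scored shown first
```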
In another possible design of the first aspect, after the image signal processor processes the second raw domain image using the preset ISP parameter and generates the first image, the electronic device may further display a prompt message, which may be used to prompt the user to trigger editing of the first image within a preset duration. If no operation to edit the first image is received within the preset duration, the first raw domain image is deleted.
A raw domain image occupies a large amount of memory. By prompting the user to use the raw domain image within the preset duration, the electronic device can reduce its memory footprint after the preset duration elapses and keep running smoothly.
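The retention policy can be sketched as follows, with an assumed 24-hour window and hypothetical names; the patent does not specify the duration or storage path.

```python
import os
import time

PRESET_DURATION_S = 24 * 60 * 60          # assumed 24-hour window

def maybe_delete_raw(raw_path, captured_at, edit_received):
    """Delete the stored raw file once the window expires without an edit."""
    expired = (time.time() - captured_at) > PRESET_DURATION_S
    if expired and not edit_received and os.path.exists(raw_path):
        os.remove(raw_path)               # reclaim the large raw file's memory
        return True
    return False
```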
In a second aspect, the present application further provides a photographing apparatus included in an electronic device. The apparatus has the function of realizing the behavior of the electronic device in the first aspect and any of its possible designs, so that the electronic device performs the photographing method of the first aspect and any of its possible designs. This function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the above functions. For example, the photographing apparatus may include a display unit, a determination unit, a generation unit, and the like.
In a third aspect, the present application further provides an electronic device, including: a camera for collecting images; an image signal processor for processing raw domain images; a display screen for displaying interfaces; a memory; one or more processors coupled with the memory, the camera, the image signal processor, and the display screen; and one or more computer programs. The one or more computer programs are stored in the memory and comprise instructions which, when executed by the electronic device, cause the electronic device to perform the photographing method of the first aspect and any of its possible designs.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the photographing method of the first aspect and any of its possible designs.
In a fifth aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the photographing method executed by the electronic device in the first aspect and any possible design thereof.
In a sixth aspect, an embodiment of the present application provides a chip system, where the chip system is applied to an electronic device. The chip system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is used for receiving signals from a memory of the electronic equipment and sending the signals to the processor, and the signals comprise computer instructions stored in the memory; the computer instructions, when executed by the processor, cause the electronic device to perform the method of photographing of the first aspect and any of its possible designs, as described above.
It is to be understood that the beneficial effects that can be achieved by the photographing device of the second aspect, the electronic device of the third aspect, the computer-readable storage medium of the fourth aspect, the computer program product of the fifth aspect, and the chip system of the sixth aspect provided by the present application can refer to the beneficial effects in the first aspect and any one of the possible design manners thereof, and are not described herein again.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an ISP module according to an embodiment of the present application;
fig. 3 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic view of a photographing interface of a mobile phone in a photographing mode according to an embodiment of the present application;
fig. 5 is a schematic interface diagram of a mobile phone in a photographing mode according to an embodiment of the present application;
fig. 6 is a schematic interface diagram of an image editing by a mobile phone according to an embodiment of the present disclosure;
fig. 7A is a schematic view of an interface for editing an image by using another mobile phone according to the embodiment of the present application;
fig. 7B is a schematic diagram of a second captured image according to the embodiment of the present application;
fig. 7C is a schematic diagram of a first captured image according to an embodiment of the present disclosure;
fig. 8 is a flowchart of a photographing method according to an embodiment of the present application;
fig. 9 is a schematic view of a display interface of a mobile phone in another photographing mode according to the embodiment of the application;
fig. 10 is a schematic view of an application scenario of a photographing method according to an embodiment of the present application;
fig. 11 is a flowchart of a photographing method according to an embodiment of the present application.
Detailed Description
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
Generally, once the ISP parameters of the electronic device are determined, the shooting style of the images it generates is determined. That is, the shooting style of the image generated by the electronic device is determined by the ISP parameters. Even so, a user generating images with the electronic device may want images in other shooting styles. To achieve this, the electronic device can imitate the shooting style of an acquired guide map (that is, a captured image generated when another device photographs a target object) and generate images with the same shooting style as the guide map.
In a first implementation, an electronic device builds an ISP-based Convolutional Neural Network (CNN) end-to-end model. The electronic device acquires a guide map, inputs it into the CNN end-to-end model, and runs the model to obtain the guide map's ISP parameters, i.e., the ISP parameters of the device that generated the guide map. If the electronic device updates its ISP parameters to the guide map's ISP parameters, the shooting style of the images it generates becomes the same as that of the guide map.
In this implementation, the guide map's ISP parameters are obtained while the CNN end-to-end model is offline. When the electronic device runs the camera application in photographing mode, its ISP parameters cannot be adjusted in real time after the guide map's ISP parameters are determined. The electronic device therefore needs multiple iterations to change the parameters in its ISP modules into those of the device that generated the guide map, and this parameter iteration makes ISP parameter debugging time long.
In a second implementation, the electronic device acquires the guide map, learns its shooting style, and extracts its shooting-style features, such as color features, luminance features, and image grayscale features. While generating an image, the ISP builds an end-to-end hardware-in-the-loop model and processes the captured image algorithmically so that its shooting-style features match those of the guide map, thereby imitating the guide map's shooting style. This implementation shares a trait with the first: the shooting-style features of the guide map are obtained offline, and algorithmic processing makes the captured image match the guide map's style features; that is, the ISP parameters need not be adjusted while the image is generated. Consequently, when the electronic device imitates the guide map's shooting style, a quality evaluation algorithm must be created to judge whether the style of the ISP's output image resembles that of the guide map. Such a quality evaluation algorithm is complex and difficult to design.
In a third implementation, the electronic device acquires the guide map, determines its shooting style, and extracts its shooting-style features using a Generative Adversarial Network (GAN). The image generated by the electronic device is then fused with the shooting style of the guide map to imitate that style.
This processing operates at the level of image rendering and does not modify the parameters of the modules in the ISP. When the shooting style of the image captured by the electronic device is fused with that of the guide map, artifacts on the generated image cannot be avoided. Here, artifact broadly refers to image distortion, for example, shadows of motion or movement that the target object does not actually have appearing in the image.
The embodiments of this application provide a photographing method that can be applied to an electronic device. The electronic device can learn the shooting style of a guide map to determine the parameters of the ISP modules of the device that generated the guide map (also called the first ISP parameters), and then modify the parameters in its own ISP modules to the first ISP parameters. Thus, when the electronic device generates a captured image, that image has the same shooting style as the guide map, achieving the goal of imitating the guide map's shooting style.
It should be noted that the electronic device in the embodiment of the present application may be a mobile phone with an image capturing function, a digital camera, a video camera, a sports camera (GoPro), a smart watch, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an in-vehicle device, a car recorder, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, and the like. In addition, the electronic device may be a vehicle-mounted computer, a Personal Digital Assistant (PDA), an Augmented Reality (AR) \ Virtual Reality (VR) device, or the like, and the embodiment of the present application is not particularly limited to a specific form of the electronic device.
Embodiments of the present application will be described below with reference to the drawings.
Please refer to fig. 1, which is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 1, the electronic device 100 may include a processor 110, an internal memory 121, a key 130, a charging management module 140, a power management module 141, a battery 142, a sensor module 180, a camera module 150, a display screen 160, and the like. Wherein the sensor module 180 may include: a gyroscope sensor, a direction sensor, an acceleration sensor, a distance sensor, a touch sensor, an ambient light sensor, etc.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an ISP, a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, a neural network processing unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), and/or a general-purpose input/output (GPIO) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses, and the processor 110 may be coupled to the camera 150 via an I2C bus.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as the display screen 160, the camera 150, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 150 communicate over a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 160 communicate via the DSI interface to implement the display function of the electronic device 100.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like.
The electronic device 100 implements display functions via the GPU, the display screen 160, and the application processor, among others. The GPU is a microprocessor for image processing, and is connected to the display screen 160 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 160 is used to display images, video, and the like. The display screen 160 includes a display panel. The display panel may adopt a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 160, with N being a positive integer greater than 1.
The electronic device 100 may implement a camera function via the ISP, NPU, camera module 150, video codec, GPU, display screen 160, and application processor, etc.
The keys 130 may include a power-on key, a volume key, a zoom key, a shoot key, etc. The key 130 may be a mechanical key or a touch key.
The camera module 150 is used for capturing still images or videos. The camera module 150 includes a lens 151 and an image sensor 152, and the lens 151 may be composed of a plurality of lenses. Light reflected by the target object is transmitted through the lens 151 to the image sensor 152 (i.e., a photosensitive element), which may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The image sensor 152 may generate a raw domain image of the target object and transmit the raw domain image to the ISP. The ISP generates image signals from the raw domain image and transmits them to the DSP for processing; the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. The display screen 160 displays image signals in RGB, YUV, and similar formats. In some embodiments, the electronic device 100 may include 1 or N camera modules 150, where N is a positive integer greater than 1.
The ISP is used to process the data fed back by the image sensor 152. For example, when a picture is taken, the shutter opens and light passes through the lens to the image sensor 152 in the camera module 150; the photosensitive element converts the optical signal into an electrical signal to form a raw image in Bayer format, which is transmitted to the ISP to be processed and converted into an image visible to the naked eye. The ISP can process the Bayer-format raw image using preset parameters; therefore, once the ISP parameters in the electronic device 100 are determined, the shooting style of the images generated by the electronic device 100 is also determined. In some embodiments, the ISP may be provided in the camera 150. In other embodiments, the ISP may be located in the processor 110.
In some embodiments, the ISP may be treated as a system on a chip that runs various algorithm programs to process image signals in real time. For example, the ISP may perform algorithmic optimization of the noise, brightness, and skin color of people in the image, and may also optimize parameters such as the exposure and color temperature of the shooting scene. The ISP receives the raw domain image transmitted by the image sensor and can convert it into a viewable image. The mobile phone is provided with an ISP database that contains the parameters of the various algorithm programs run by the ISP. The ISP database is called when the ISP runs, so that the ISP runs its algorithms according to the parameters in the database and converts the raw domain image into an image in a preset format.
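The ISP database idea can be sketched as a per-module parameter lookup at run time; all module names, keys, and values below are illustrative assumptions, not real device tuning data.

```python
ISP_DATABASE = {
    "rawnf": {"nlm_strength": 0.6, "search_window": 21},
    "cc":    {"ccm": [[1.4, -0.3, -0.1],
                      [-0.2, 1.3, -0.1],
                      [-0.1, -0.3, 1.4]]},
    "gamma": {"gamma": 2.2},
    "sharp": {"amount": 0.8},
}

def apply_module(name, image, params):
    return image                         # placeholder for the module algorithm

def run_isp_pipeline(raw_image, database=ISP_DATABASE):
    image = raw_image
    for module_name in ("rawnf", "cc", "gamma", "sharp"):
        params = database[module_name]   # parameters fetched per module
        image = apply_module(module_name, image, params)
    return image
```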
It can be understood that, since the ISP converts the raw domain image into an image in a preset format, the ISP needs multiple functions; if the ISP is divided into modules by function, it may include multiple algorithm modules.
Fig. 2 is a schematic diagram illustrating the algorithm module structure of an ISP. As shown in fig. 2, the ISP may include modules such as Black Level Correction (BLC), raw-domain noise filtering (Rawnf), CC, Dynamic Range Compression (DRC), gamma correction (GAMMA), sharpening (Sharp), and a three-dimensional color lookup table (3D Look-Up Table, 3D LUT).
The Rawnf module performs noise reduction on the raw domain image using the Non-Local Means algorithm (NLM). NLM processes noisy pixels in the raw domain image: the gray value of a pixel is obtained as a weighted average of the gray values of the pixels around it, achieving image denoising. When the NLM algorithm is used to remove raw domain noise, its running time is proportional to the denoising effect; that is, the longer the Rawnf module runs, the better the denoising of the raw domain image it processes.
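A minimal NLM sketch for a small grayscale image follows, matching the description above (each pixel becomes a similarity-weighted average of the others); the patch size and filtering strength h are illustrative, and real Rawnf modules are far more optimized.

```python
import numpy as np

def nlm_denoise(img, patch=1, h=0.1):
    """Brute-force non-local means: weight every pixel by patch similarity."""
    H, W = img.shape
    pad = np.pad(img, patch, mode="reflect")
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            p = pad[y:y + 2 * patch + 1, x:x + 2 * patch + 1]
            weights = np.zeros((H, W))
            for yy in range(H):
                for xx in range(W):
                    q = pad[yy:yy + 2 * patch + 1, xx:xx + 2 * patch + 1]
                    d2 = ((p - q) ** 2).mean()        # patch distance
                    weights[yy, xx] = np.exp(-d2 / (h * h))
            out[y, x] = (weights * img).sum() / weights.sum()
    return out

rng = np.random.default_rng(2)
noisy = np.clip(0.5 + 0.1 * rng.standard_normal((8, 8)), 0.0, 1.0)
clean = nlm_denoise(noisy)  # brute force: longer running time, better denoising
```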
The CC module may be used to correct color errors in the image. The color correction process is as follows: the raw domain image obtained by the ISP is compared with a preset standard image to obtain a correction matrix. The correction matrix contains the differences between the colors of the raw domain image's pixels and the colors of the preset standard image; applying the correction matrix to the Bayer-format image generated by the image sensor corrects its colors, so that the colors in the captured image generated by the mobile phone are closer to the real colors of the target object.
It is worth noting that correcting the colors in an image also adjusts the saturation of those colors. Saturation refers to the purity of a color in the image: the higher the purity, the more vivid the color appears; the lower the purity, the duller the color appears.
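The saturation side effect can be sketched by scaling chroma about the per-pixel luma; the Rec. 601 luma weights below are standard, while the scale factors are illustrative assumptions.

```python
import numpy as np

def adjust_saturation(rgb, scale):
    """Scale chroma around per-pixel luma; scale > 1 raises color purity."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])     # Rec. 601 weights
    return np.clip(luma[..., None] + scale * (rgb - luma[..., None]), 0.0, 1.0)

img = np.random.rand(4, 4, 3)
vivid = adjust_saturation(img, 1.4)   # purer, more vivid colors
muted = adjust_saturation(img, 0.6)   # duller colors
```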
The dynamic range in DRC refers to the ratio of the gray level of the brightest pixels to that of the darkest pixels in a captured image generated by the mobile phone. When a strong light source (such as direct sunlight, strong reflection, or high-brightness lighting) is present in the photographed scene, highlight and dim regions exist simultaneously; the highlight region may be overexposed and the dim region underexposed. Overexposed highlights appear white and underexposed shadows appear black in the captured image, degrading image quality. DRC addresses overexposure in highlights and underexposure in shadows, so that both regions render properly in the captured image generated by the mobile phone.
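As a sketch of dynamic range compression, a simple global tone curve (a Reinhard-style x/(1+x) mapping, chosen here for brevity rather than taken from the patent) lifts dim regions while rolling off highlights.

```python
import numpy as np

def drc(linear, exposure=4.0):
    """Compress highlights while preserving shadow detail."""
    boosted = linear * exposure
    return boosted / (1.0 + boosted)     # Reinhard-style global tone curve

scene = np.array([0.001, 0.01, 0.1, 1.0, 10.0])  # dim to very bright radiance
print(drc(scene))  # all values land inside the displayable [0, 1) range
```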
The GAMMA correction module compensates the preview or captured image shown on the display when the image is gamma-encoded, so that people perceive light and brightness in the image correctly. It will be appreciated that gamma correction also makes the most of the data bits or bandwidth used to represent pixel gray levels in the image data. When the display in the mobile phone shows a preview or captured image, the luminance it produces is proportional to the signal amplitude raised to a power called gamma. In this way, the usable range of luminance values in the displayed preview or captured image is expanded.
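Gamma encoding and decoding can be sketched directly from this power-law relationship; gamma = 2.2 is the common display value, assumed here.

```python
import numpy as np

def gamma_encode(linear, gamma=1.0 / 2.2):
    """Spend more code values on dark tones, where vision is most sensitive."""
    return np.clip(linear, 0.0, 1.0) ** gamma

def gamma_decode(encoded, gamma=2.2):
    return np.clip(encoded, 0.0, 1.0) ** gamma

x = np.linspace(0.0, 1.0, 5)
assert np.allclose(gamma_decode(gamma_encode(x)), x)  # round-trips losslessly
```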
The Sharp module sharpens the image generated by the mobile phone to restore its details. After the image sensor collects the original image, the image is input into the ISP. The image from the CMOS sensor may contain various kinds of noise, including random noise, quantization noise, and fixed-pattern noise. The ISP must denoise the original image, but the denoising process also removes details, leaving the image insufficiently sharp. To avoid losing detail during denoising, the image is therefore sharpened to improve its display effect.
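Sharpening to restore detail lost in denoising is commonly done by unsharp masking, sketched below; the box blur and amount are illustrative choices, not the patent's Sharp algorithm.

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge padding."""
    pad = np.pad(img, 1, mode="edge")
    return sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def unsharp_mask(img, amount=1.0):
    detail = img - box_blur(img)              # high-frequency content lost to blur
    return np.clip(img + amount * detail, 0.0, 1.0)

sharpened = unsharp_mask(np.random.rand(6, 6))
```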
The 3D LUT module is a three-dimensional color lookup table for the image: the LUT converts each input RGB triple into an output value. For example, when the LUT receives a signal, it treats it as a lookup address, retrieves the corresponding content from the table, and outputs it for display. A 3D LUT can describe all colors accurately; it can handle the nonlinear behavior of colors in display as well as large color variations. The 3D LUT is therefore suitable for accurate color calibration. Color calibration problems in displaying preview or captured images, such as color gamma values, color ranges, color-range tracking errors, correction of nonlinearities in color display, color crosstalk, and color saturation, can all be addressed with a 3D LUT.
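A 3D LUT lookup with trilinear interpolation can be sketched as follows; a 2x2x2 identity LUT is used for brevity, whereas real LUTs typically have 17^3 or 33^3 entries.

```python
import numpy as np

def sample_3d_lut(rgb, lut):
    """Trilinearly interpolate an output color from a cubic RGB lookup table."""
    n = lut.shape[0]
    idx = rgb * (n - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = idx - lo                         # fractional position inside the cell
    out = np.zeros(3)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                corner = lut[(hi if dr else lo)[0],
                             (hi if dg else lo)[1],
                             (hi if db else lo)[2]]
                out += w * corner
    return out

grid = np.linspace(0.0, 1.0, 2)
identity_lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
assert np.allclose(sample_3d_lut(np.array([0.3, 0.6, 0.9]), identity_lut),
                   [0.3, 0.6, 0.9])      # identity LUT returns the input color
```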
It is understood that fig. 2 in the embodiment of the present application only shows a part of modules in the ISP. The above examples do not constitute a specific limitation on the modules in the ISP, and more or fewer modules may be included in the ISP.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 3 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, such as notifications of download completion or message alerts. The notification manager may also present notifications that appear in the system's top status bar in the form of a chart or scrolling text, such as notifications of applications running in the background, or notifications that appear on the screen as a dialog window, for example, text information prompted in the status bar, a prompt tone, vibration of the electronic device, or a flashing indicator light.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary work flows of software and hardware of the electronic device 100 in connection with a photographing scene.
Taking the electronic device as a mobile phone: when the mobile phone runs the camera application, the lens in its camera module is open. The mobile phone displays the camera application's interface; when the shooting key is triggered, the touch sensor receives the touch operation and a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including touch coordinates, a timestamp of the touch operation, and other information), which is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a tap on the shooting key: the corresponding control is the shooting-key control in the camera application's interface. The camera application calls an interface of the application framework layer and, through the kernel layer, starts the camera driver and captures a still image.
The implementation methods in the following embodiments can be implemented in an electronic device having the above hardware structure.
The photographing method provided by the embodiment of the application can be applied to the following application scenes.
Scene one: the electronic device generates an image.
A user may generate an image using an electronic device. Specifically, taking the electronic device as a mobile phone: a camera application (which may also simply be called the camera) is installed in the mobile phone. For example, when the mobile phone displays the main interface shown in fig. 4 (a), the main interface may include application icons of multiple applications, such as a camera application 401, an address book application 402, a phone application 403, an information application 404, and the like. When the user clicks the icon of the camera application 401 in the main interface shown in fig. 4 (a), the mobile phone displays the preview interface shown in fig. 4 (b), which includes a preview image 405. At this time, the preview image 405 is an image captured by a camera of the mobile phone. The image displayed on the preview interface has the same shooting style as the images the mobile phone generates, so by looking at the interface the user can learn the shooting style of the mobile phone's images. If the user wants to change that shooting style, the mobile phone can run the photographing method provided by the embodiments of this application and change the shooting style of the images it generates.
The mobile phone may display the shooting preview interface shown in fig. 5 (a); viewing it, the user finds that it includes the preview image 405 and an imitate shooting style control 406. As its name implies, the imitate shooting style control 406 lets the mobile phone imitate the shooting style of other images; if the user wishes to change the shooting style of the images generated by the mobile phone, the user can click the imitate shooting style control 406. The mobile phone receives the user's operation on the imitate shooting style control 406 and displays the shooting preview interface shown in fig. 5 (b), which includes a first guide map 407, a second guide map 408, and a third guide map 409. It is understood that the shooting preview interface shown in fig. 5 (b) includes a plurality of guide maps (i.e., images whose shooting styles differ from that of the images the mobile phone generates); if, when viewing this interface, the user likes the shooting style of the first guide map 407, the user can select it. The mobile phone receives the user's selection of the first guide map 407, determines to imitate the shooting style of the first guide map, and determines the first ISP parameter corresponding to the first guide map 407 (i.e., the ISP parameter in the electronic device that generated the first guide map). The mobile phone can then set its ISP parameter to the first ISP parameter and display the shooting preview interface shown in fig. 5 (c), which includes a preview image 410. Since the ISP parameter in the mobile phone is now the first ISP parameter, the shooting style of the preview image 410 is the same as that of the first guide map 407.
For example, if the user is satisfied with the image in the current mobile phone photographing preview interface, the user may click a photographing key 411 in the photographing preview interface as shown in fig. 5 (c). The mobile phone receives an operation of triggering the photographing key 411, and the mobile phone generates an image.
As another example, if the user is not satisfied with the image in the current shooting preview interface, the user may click the imitate shooting style control 406 in the shooting preview interface shown in fig. 5 (c). That is, the user is not satisfied with the imitated shooting style of the first guide map and wants to change the shooting style of the image in the preview interface again. In this case, the mobile phone receives the user's operation on the imitate shooting style control 406 and displays the shooting preview interface shown in fig. 5 (d), which includes an original image 412, the second guide map 408, and the third guide map 409. The user views this interface and selects any guide map again, so that the mobile phone imitates that guide map's shooting style.
The original image 412 is the image generated by the mobile phone before its ISP parameter was adjusted (that is, with the mobile phone's initial ISP parameter), namely the preview image 405 shown in fig. 4 (b).
Scene two: the electronic equipment edits the shot image.
The user generates an image with the mobile phone and, when the user needs to edit the image, can edit it on the mobile phone. Specifically, taking the electronic device as a mobile phone: while the mobile phone displays the shooting preview interface shown in fig. 4 (b), its shooting key is triggered and the mobile phone generates an image 601.
Take the example where the user wants to edit the image 601.
When the user views the image 601 captured by the mobile phone, the mobile phone displays an image display interface as shown in fig. 6 (a). The image display interface shown in fig. 6 (a) includes an image 601, a sharing control 61, a favorite control 62, an editing control 63, a deletion control 64, and a more control 65.
When the user wants to edit the image 601, the user clicks the edit control 63. The mobile phone receives the user's trigger operation on the editing control 63 and displays the image editing interface shown in fig. 6 (b). The image editing interface includes an imitate shooting style control 66, an exposure control 67, and a contrast control 68. The interface shown in fig. 6 (b) further displays the name of the imitate shooting style control 66; when any function control occupies the position of the imitate shooting style control 66, the name of the corresponding function control can be displayed.
While editing the image 601 with the mobile phone, if the user wishes to change its shooting style, the user can trigger the imitate shooting style control 66. The mobile phone receives the user's operation on the imitate shooting style control 66 and displays the shooting style editing interface shown in fig. 6 (c), which includes the first guide map 407, the second guide map 408, and the third guide map 409. It is understood that the guide maps shown in fig. 6 (c) are merely examples; the mobile phone may display other guide maps when it receives the user's sliding operation in the guide map display area.
It should be noted that, when the user views the guide maps through the interface shown in fig. 6 (c), any guide map can be selected. In this way, the mobile phone converts the shooting style of the image 601 into the shooting style of the selected guide map, thereby changing the shooting style of the image 601. Here, the user selecting the first guide map 407 is taken as an example. The mobile phone displays the image editing interface shown in fig. 6 (d); the image editing interface includes an image 602, and the shooting style of the image 602 is the same as that of the first guide map 407.
For example, the mobile phone generates a shot image, and the user views the shot image 601 through an album application. The mobile phone receives an operation of running the album application, runs it, and displays the image 601. If the mobile phone then receives a user operation on the simulated shooting style control 66, the mobile phone may display the guide maps stored in the album. That is, the camera application has the right to access the album application, so that the mobile phone can display the guide maps in the album.
Specifically, the mobile phone determines the first ISP parameter corresponding to the first guide map and sets the ISP parameter of the mobile phone to the first ISP parameter. The mobile phone can then input the raw domain image corresponding to the image 601 into the ISP to obtain the image 602. It is worth mentioning that, since the image 601 was generated based on the raw domain image transmitted by the image sensor, changing the shooting style of the image requires the ISP to process the raw domain image again. Therefore, when the image 601 is generated, the raw domain image corresponding to the image 601 is stored, so that the mobile phone can edit the shooting style of the image 601 when editing the image.
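For illustration, the re-rendering step can be sketched as follows. The internal ISP stages are not specified in this application, so the sketch assumes a simplified ISP whose tunable parameters are white-balance gains, a 3x3 color correction matrix, and a gamma value (all names and values are hypothetical):

import numpy as np

def run_isp(raw_rgb, params):
    # Minimal ISP sketch: apply white-balance gains, a 3x3 color
    # correction matrix, and gamma to a demosaiced raw image in [0, 1].
    img = raw_rgb * params["wb_gains"]               # per-channel white balance
    img = np.clip(img @ params["ccm"].T, 0.0, 1.0)   # color correction matrix
    img = img ** (1.0 / params["gamma"])             # gamma / tone curve
    return (img * 255.0 + 0.5).astype(np.uint8)

# Re-rendering the stored raw domain image with the guide map's ISP
# parameters yields the new shooting style without re-capturing the scene.
first_isp_params = {                                 # hypothetical values
    "wb_gains": np.array([1.9, 1.0, 1.6]),
    "ccm": np.array([[ 1.5, -0.3, -0.2],
                     [-0.2,  1.4, -0.2],
                     [-0.1, -0.4,  1.5]]),
    "gamma": 2.2,
}
raw_rgb = np.random.rand(4, 6, 3)    # stand-in for the demosaiced raw image
image_602 = run_isp(raw_rgb, first_isp_params)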
Illustratively, if the user is satisfied with the shooting style of the image 602, the user does not trigger the simulated shooting style control 66 again. In this case, editing of the image shooting style is complete.
As yet another example, if the user is not satisfied with the shooting style of the image 602, the user may again trigger the simulated shooting style control 66 shown in fig. 6 (d). When the mobile phone receives the user operation on the simulated shooting style control 66 in fig. 6 (d), the mobile phone may display the image editing interface shown in fig. 7A. The image editing interface shown in fig. 7A includes the image 601, the second guide map 408, and the third guide map 409. The user views the image editing interface shown in fig. 7A and reselects any guide map, so that the mobile phone simulates the shooting style of that guide map and generates an image in the corresponding shooting style.
Generally, when an electronic device generates a first captured image, the image sensor acquires a raw domain image and transmits it to the ISP, and the ISP processes the raw domain image to generate the first captured image. That is, in this conventional flow, the electronic device does not store the raw domain image of the first captured image. When the electronic device executes the photographing method provided by this embodiment of the application, the electronic device acquires a raw domain image, copies it to obtain a first raw domain image and a second raw domain image, and inputs the first raw domain image into the ISP to obtain the first captured image (such as the image 601). The electronic device stores the second raw domain image, so that when it receives an editing operation simulating the shooting style of a guide map, it can process the second raw domain image to generate a second captured image (such as the image 602 described above). It should be noted that a raw domain image occupies a large amount of memory; to ensure smooth operation of the electronic device, after the first captured image is generated, the electronic device may delete the second raw domain image if no image editing operation simulating a shooting style is received within a preset time period.
The preset time period may be 1 minute, 3 minutes, 5 minutes, or the like, and is not particularly limited. It will be appreciated that the preset duration is the time provided for the user to consider how to process the image. In some implementations, after the mobile phone stores the raw domain image, a prompt message may be displayed, where the prompt message is used to prompt the user that the image shooting style can be edited within the preset time (for example, within 3 minutes) and cannot be edited after the preset time. For example, the prompt message may be: "The image shooting style can be adjusted within 3 minutes; after that, it can no longer be adjusted."
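A minimal sketch of this retention policy, assuming the second raw domain image is cached as a file (the path and duration are illustrative):

import os
import threading

RAW_RETENTION_SECONDS = 3 * 60   # the preset duration, e.g. 3 minutes

def retain_raw_with_expiry(raw_path, retention=RAW_RETENTION_SECONDS):
    # Keep the second raw domain image on disk; delete it automatically
    # if no style-editing operation arrives before the timer fires.
    def expire():
        if os.path.exists(raw_path):
            os.remove(raw_path)   # free storage once the editing window closes

    timer = threading.Timer(retention, expire)
    timer.daemon = True
    timer.start()
    return timer                  # call timer.cancel() when editing starts

# Usage after generating the first captured image (hypothetical path):
# timer = retain_raw_with_expiry("/data/cache/IMG_0601.raw")
# ... on a "simulate shooting style" edit: timer.cancel()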
Illustratively, when the ISP parameter in the mobile phone is the preset ISP parameter, the mobile phone generates a second captured image, and the second captured image is an image of the first shooting style. The first guide map is an image of a second shooting style, and the first captured image is an image of the second shooting style. Fig. 7B shows a second captured image (that is, an image of the first shooting style) generated by the mobile phone. The second captured image includes the target objects: sky, clouds, trees, and a path. As shown in fig. 7B, the brightness of the sky and the clouds is high, while the brightness of the trees and the path area is low; the brightness difference between the sky part (sky and clouds) and the ground objects (trees and path) presented in the image is therefore large, and the picture shown in fig. 7B cannot truly reflect the brightness of the shooting scene.
When an image is processed by using the high dynamic range (HDR) technology, a larger exposure dynamic range (that is, a larger difference between light and dark) is used for imaging, so that the image generated by the electronic device can truly reflect the brightness in the shooting environment. In other words, HDR imaging can truly reflect both the luminance of the brightest area and the luminance of the darkest area in an image. That is, the HDR processing capability of the mobile phone that generated the image shown in fig. 7B is poor: fig. 7B cannot truly reflect the luminance of the brightest area and the darkest area in the captured scene.
In this case, the first guide map with better HDR processing effect is selected, and the mobile phone may process the raw domain image corresponding to fig. 7B by using the first ISP parameter corresponding to the first guide map, so that the display effect of fig. 7B may be improved.
For example, the mobile phone generates the second captured image and stores the raw domain image corresponding to it. The first guide map is an HDR image and corresponds to the first ISP parameter. The ISP parameter in the mobile phone is set to the first ISP parameter, and the ISP processes the raw domain image corresponding to fig. 7B to obtain the image (the first captured image) shown in fig. 7C. As shown in fig. 7C, the brightness difference between the sky part (sky and clouds) and the ground objects (trees and path) becomes small. Compared with the image shown in fig. 7B, the image shown in fig. 7C provides a larger dynamic range and more image detail, and can better reflect the visual effect of the real environment as seen by the human eye.
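To make the comparison concrete, a rough dynamic-range statistic can be computed as below; the percentile choices are illustrative, not taken from this application:

import numpy as np

def dynamic_range_stops(gray):
    # Rough dynamic-range estimate in photographic stops: log2 ratio of a
    # bright percentile to a dark percentile of a grayscale image in (0, 1].
    lo = max(np.percentile(gray, 1), 1e-6)
    hi = max(np.percentile(gray, 99), 1e-6)
    return float(np.log2(hi / lo))

# A rendering that lifts shadow detail (as in fig. 7C) raises the dark
# percentile, so the measured gap between sky and ground regions shrinks
# relative to the clipped rendering of fig. 7B.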
The following will explain the principle of acquiring ISP parameters corresponding to a guide map in the photographing method provided in the embodiment of the present application.
Continuing with the example in which the electronic device is a mobile phone, the mobile phone may determine the first ISP parameter corresponding to a guide map (for example, the first guide map 407). If the parameter of the ISP in the mobile phone is set to the first ISP parameter, the shooting style of images shot by the mobile phone is the same as that of the guide map. In some implementations, the mobile phone can determine, via a neural network, the first ISP parameter corresponding to the first guide map.
Illustratively, the first ISP parameter corresponding to a guide map is determined by a neural network (also referred to as an ISP parameter estimation model). For example, any guide map is input into the ISP parameter estimation model, the model is run, and the model outputs the ISP parameters of the device that generated the guide map. In some implementations, an ISP parameter estimation model may be preset in the mobile phone, and the mobile phone inputs the guide map into the model to obtain the ISP parameters of the device that generated the guide map. In other implementations, the ISP parameter estimation model is deployed on a remote server (or cloud device); the mobile phone obtains a guide map and transmits it to the remote server. The remote server takes the guide map as the input of the ISP parameter estimation model, runs the model to obtain the ISP parameters of the device that generated the guide map, and sends those ISP parameters to the mobile phone.
Specifically, the ISP parameter estimation model may be composed of a plurality of neural units, each of which may be a function that converts an input signal into an output signal. For example, the ISP parameter estimation model may be structured as a neural network formed by a plurality of single neural units joined together; in such a configuration, the output of one neural unit may be the input of another. The ISP parameter estimation model may include multiple convolutional layers, and the output signal of one layer may serve as the input signal of the next convolutional layer. The input of each neural unit may be connected to the local receptive field of the previous convolutional layer to extract the features of that local receptive field, where a local receptive field may be a region composed of several neural units.
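A minimal sketch of such a model is given below, assuming the ISP parameters can be flattened into a fixed-length vector; the layer sizes and the parameter count of 16 are illustrative only:

import torch
import torch.nn as nn

class ISPParamEstimator(nn.Module):
    # Toy convolutional network: guide map in, ISP parameter vector out.
    def __init__(self, num_isp_params=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),  # local receptive fields
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),    # global pooling -> input-size independent
        )
        self.head = nn.Linear(64, num_isp_params)  # regress the ISP parameters

    def forward(self, guide_map):
        x = self.features(guide_map).flatten(1)
        return self.head(x)

model = ISPParamEstimator()
guide_map = torch.rand(1, 3, 224, 224)   # stand-in for the first guide map
estimated_params = model(guide_map)      # the "second ISP parameters"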
It should be noted that the ISP parameter estimation model is obtained after training and learning. The model comprises a plurality of weight matrices, and training these weight matrices is the process of training the ISP parameter estimation model.
Illustratively, the training data for the ISP parameter estimation model in this embodiment of the application includes a guide map and the ISP parameters of the device that generated the guide map (referred to as first ISP parameters). The guide map may be generated by an electronic device shooting a target object, and the image has not undergone image editing operations (such as color grading, compositing, or modification of shading or chroma). The ISP parameter estimation model receives the guide map and infers the ISP parameters of the device that generated it (referred to as second ISP parameters), and the parameters in the ISP parameter estimation model are corrected by fitting the second ISP parameters to the first ISP parameters. Training the ISP parameter estimation model is a process of continuously iterating and correcting the parameters in the model, so that the second ISP parameters estimated by the model from the guide map become closer to the first ISP parameters. In this way, the model learns the relationship between a guide map and the ISP parameters of the device that generated it, so that it can estimate, from any guide map, the ISP parameters of the device that generated that guide map.
It will be appreciated that if the ISP parameter estimation model uses a neural network, the specific training process is closely related to the specific structure of the network, and the above is only an example. In practical applications, a reasonable training mode can be set according to the model structure of the ISP parameter estimation model.
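A schematic training step for the fitting scheme described above, reusing the ISPParamEstimator sketched earlier; the optimizer, learning rate, and mean-squared-error loss are illustrative assumptions:

import torch
import torch.nn.functional as F

model = ISPParamEstimator()     # the toy estimator sketched above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(guide_maps, first_isp_params):
    # One iteration: move the estimated (second) ISP parameters toward the
    # ground-truth (first) ISP parameters of the devices that shot the images.
    optimizer.zero_grad()
    second_isp_params = model(guide_maps)                  # model's estimate
    loss = F.mse_loss(second_isp_params, first_isp_params)
    loss.backward()                                        # corrects the weight matrices
    optimizer.step()
    return loss.item()

# batch of guide maps and their devices' ISP parameters (random stand-ins)
loss = train_step(torch.rand(8, 3, 224, 224), torch.rand(8, 16))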
In the application scenario, when the mobile phone receives a user operation and determines that the user wants to adjust the shooting style of the image, the mobile phone can display a plurality of guide graphs. The mobile phone may display a plurality of guide graphs according to a preset display rule, and the display order of the plurality of guide graphs is described as an example below.
For example, each guide map can be scored by an image scoring model, and the mobile phone can display the guide maps in descending order of score. The image scoring model may be an image algorithm (for example, referred to as an image scoring algorithm) that processes each guide map and outputs a score for each guide map. For example, the image scoring algorithm may generate the score of a guide map according to features such as color contrast, texture features, and user preference, and the weight of each feature may differ. The color contrast of a guide map represents the brightness level difference between the white of the brightest area and the black of the darkest area in the image; the larger the contrast of the guide map, the richer the colors in the image, which indicates that the device generating the guide map can present richer colors. The texture features of a guide map represent surface properties of the target object corresponding to the guide map or a partial region of the guide map. The user preference represents whether the user likes the guide map and is generally 1 or 0: if the guide map is marked as a "favorite", the user preference for the guide map is 1; if it is not marked as a "favorite", the user preference is 0.
It should be noted that determining image features such as the color contrast and the texture features of a guide map requires processing the guide map with corresponding image algorithms. The image scoring algorithm may include the corresponding algorithms for obtaining the contrast and the texture features of an image, so that the contrast and texture features of a guide map are obtained when the mobile phone runs the image scoring algorithm, and the mobile phone can then determine the score of the guide map according to its contrast, texture features, and the like.
After the image scoring model scores each guide map (also referred to as guide map 1), the scores of the guide maps 1 can be stored in the form of a table (for example, referred to as a guide map 1 scoring table). When the mobile phone displays a plurality of guide maps 1, it can read the guide map 1 scoring table and display the guide maps 1 in the order of their scores.
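A minimal sketch of this scoring-and-ordering step; the feature weights below are hypothetical, since this application only states that the proportion of each feature may differ:

from dataclasses import dataclass

@dataclass
class GuideMapFeatures:
    name: str
    contrast: float   # color contrast, normalized to [0, 1]
    texture: float    # texture feature strength, normalized to [0, 1]
    favorite: int     # user preference: 1 if marked "favorite", else 0

WEIGHTS = {"contrast": 0.4, "texture": 0.4, "favorite": 0.2}  # hypothetical

def score(g):
    return (WEIGHTS["contrast"] * g.contrast
            + WEIGHTS["texture"] * g.texture
            + WEIGHTS["favorite"] * g.favorite)

scoring_table = {g.name: score(g) for g in [
    GuideMapFeatures("guide_map_407", 0.8, 0.6, 1),
    GuideMapFeatures("guide_map_408", 0.7, 0.9, 0),
    GuideMapFeatures("guide_map_409", 0.5, 0.4, 0),
]}
display_order = sorted(scoring_table, key=scoring_table.get, reverse=True)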
The photographing method provided by the embodiment of the application will be described below with reference to the accompanying drawings.
Example 1
The photographing method provided by this embodiment of the application can be applied to an electronic device; the method is described below by taking a mobile phone as the electronic device. A camera application (also referred to as a camera) is installed in the mobile phone; the mobile phone runs the camera application, and the photographing method provided by the application can be applied in the process of generating an image. In this way, the mobile phone can generate an image with the same shooting style as the guide map.
Please refer to fig. 8, which is a flowchart of a photographing method provided in the present application. As shown in fig. 8, the method includes steps 801-806.
Step 801: the mobile phone runs the camera application and displays a photographed preview interface, wherein the preview interface comprises a preview image of the first photographing style.
It should be noted that the mobile phone is in a photographing mode and the camera in the camera module of the mobile phone is on. Light reflected by the target object passes to the lens in the camera module and, after being refracted by the lens, reaches the image sensor, so that the image sensor generates a raw domain image of the target object. The raw domain image may be an image in Bayer format and may also be referred to as a raw domain image in Bayer format. The image sensor transmits the resulting raw domain image to the ISP, and the ISP converts the raw domain image into a red-green-blue (RGB) image. Furthermore, the mobile phone can generate preview images from continuous multi-frame images and display them on the preview interface, so that the user can check, in real time through the display screen, the shooting range of the camera and the shooting style of the images generated by the mobile phone.
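As a rough illustration of this conversion, the sketch below demosaics a Bayer mosaic by averaging each 2x2 block; real ISPs use far more sophisticated interpolation, and the RGGB layout is an assumption:

import numpy as np

def demosaic_rggb(bayer):
    # Naive demosaic: treat each 2x2 RGGB block (R, G on top; G, B below)
    # as one RGB pixel, averaging the two green samples.
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
    b = bayer[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)   # HxWx3 RGB at half resolution

bayer = np.random.rand(8, 8)   # stand-in for a raw domain image in Bayer format
rgb = demosaic_rggb(bayer)     # shape (4, 4, 3)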
When the mobile phone is in the photographing mode, the camera module in the mobile phone is continuously exposed, so that the mobile phone continuously collects multi-frame images, and the preview interface displays the continuous multi-frame images. That is, the preview image is a multi-frame image. Assuming that images shot by the mobile phone are images of the first shooting style, each frame of the preview image is an image of the first shooting style. In this way, the user can learn the shooting style of the mobile phone through the preview image in the preview interface. Fig. 4 (b) is a schematic diagram of the mobile phone displaying a photographing preview image; as shown in fig. 4 (b), the photographing interface displayed by the mobile phone includes the preview image 405 in the photographing mode.
Step 802: and the mobile phone identifies a target object in the preview image and displays at least one guide graph, wherein the guide graph comprises the target object in the preview image.
The at least one guide map includes a first guide map, and the shooting style of the guide map is different from that of the preview image.
After the mobile phone obtains the photographed preview image, the mobile phone can identify the preview image and determine the target object included in the preview image. For example, as shown in fig. 4 (b), the preview image 405 is included in the photographed preview interface, the mobile phone recognizes the preview image 405, and determines that the target object in the preview image 405 includes sky, cloud, building, and tree. It will be appreciated that the preview image 405 is composed of a plurality of frames of images, and the preview image may also be referred to as a preview image stream.
In some implementations, the mobile phone may extract any frame of image from the preview image stream as a reference frame, and identify a target object in the reference frame, where the target object included in the reference frame is a target object included in the preview image.
In other implementations, the mobile phone may extract at least two frames of images from the preview image stream as reference frames, identify the target objects included in the reference frames, and use the target objects included in the reference frames as the target objects of the preview image.
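A sketch of this reference-frame approach, where detect_objects stands for whatever recognition model the mobile phone runs (a hypothetical helper):

def preview_targets(preview_stream, detect_objects, num_refs=2):
    # Union the target objects found in a few reference frames drawn from
    # the preview image stream. detect_objects is a hypothetical callable
    # returning a set of labels such as {"sky", "cloud", "building", "tree"}.
    step = max(1, len(preview_stream) // num_refs)
    reference_frames = preview_stream[::step][:num_refs]
    targets = set()
    for frame in reference_frames:
        targets |= detect_objects(frame)
    return targets

# With num_refs=1 this reduces to the single-reference-frame implementation;
# with num_refs >= 2 it covers the multi-reference-frame implementation.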
Illustratively, as shown in fig. 9 (a), the mobile phone displays a preview interface in the photographing mode; the preview interface includes the preview image 405. The mobile phone recognizes that the target objects in the preview image 405 include sky, cloud, building, and tree, and displays the photographing preview interface shown in fig. 9 (b). The interface shown in fig. 9 (b) includes at least one guide map (which may also be referred to as guide map 1), such as the first guide map 407, the second guide map 408, and the third guide map 409. For example, the target objects in the first guide map 407 include sky, trees, and water; the target objects in the second guide map 408 include sky, cloud, mountains, and water; and the target objects in the third guide map 409 include sky, water, and trees. Each of the three guide maps contains at least one target object that is the same as a target object in the preview image.
It is understood that, in the interface shown in fig. 9 (b), three guide maps are superimposed on the preview image, and block part of the preview image. This display mode is merely an example, and in practical applications, the guide map may be displayed in the photographing interface in other display modes. For example, the mobile phone may display a preview interface in the photographing mode as shown in fig. 5 (b). The three guide maps are displayed below the preview image 405, and the preview image is not blocked, so that the display effect of the preview image is not affected.
It should be noted that the interface shown in fig. 9 (b) and the interface shown in fig. 5 (b) both include three guide maps, and in practical applications, a greater or lesser number of guide maps may be included in the photo preview interface. This is merely an example and does not limit the display position and the number of displays of the guide map.
In a possible implementation, the mobile phone runs the camera application and displays the photographing preview interface, where the preview interface includes a simulated shooting style control. As shown in fig. 5 (a), the photographing preview interface displayed by the mobile phone includes the simulated shooting style control 406, and in response to the user's operation on the simulated shooting style control 406, the mobile phone displays the preview interface shown in fig. 5 (b). Alternatively, as shown in fig. 5 (a), the photographing preview interface displayed by the mobile phone includes the simulated shooting style control 406, and in response to the user's operation on the simulated shooting style control 406, the mobile phone displays the preview interface shown in fig. 9 (b).
For example, step 802 may be: "in response to a first operation by the user, the mobile phone displays at least one guide map, and the shooting style of the guide map is different from that of the preview image". The photographing preview interface displayed by the mobile phone includes the simulated shooting style control, and the first operation may be a trigger operation of the user on the simulated shooting style control.
It can be understood that when the mobile phone displays a plurality of guide maps 1, the plurality of guide maps 1 can be displayed according to their scores. The method for scoring guide map 1 may use the image scoring model described above and is not repeated here.
For example, the image scoring model may be preset in the mobile phone. For example, the mobile phone includes an image processing module, and the image processing module includes the image scoring algorithm. When the mobile phone runs the image processing module, the mobile phone can score the guide maps 1. For example, the camera application has the right to access the album application, the camera application recognizes the guide maps 1 in the album, and the mobile phone runs the image processing module to score each guide map 1. The mobile phone then identifies, among the guide maps 1, the images that include at least one target object of the preview image (for example, referred to as guide maps 2), and the mobile phone can display a plurality of guide maps 2 according to the scores of the guide maps 2.
It should be noted that a guide map 1 in the mobile phone may be an image of another shooting style generated by the mobile phone, an image downloaded from the Internet, or an image in shared data received by the mobile phone. The source of guide map 1 is not particularly limited in the embodiments of the present application.
Further illustratively, an application scene in which the mobile phone interacts with a remote server (or cloud device) in the process of generating the image by the mobile phone is taken as an example. As shown in fig. 10, a cell phone 901 interacts with a remote server 902 for the purpose of simulating a shooting style. An image scoring algorithm is preset in the remote server 902, the remote server 902 scores a plurality of guide graphs 1, and the score of each guide graph 1 is used as a label of the image. When the mobile phone runs a camera application and displays a photographed preview image, the mobile phone 901 transmits the preview image to the remote server 902.
For example, the remote server 902 may transmit a plurality of guide maps 1 to the mobile phone 901 according to the preview image; the mobile phone 901 receives the guide maps 1, recognizes their labels, determines their scores, and may display the guide maps 1 according to the scores.
As another example, the remote server 902 may identify the preview image to determine the target objects it includes, and accordingly determine the guide maps 2 among the guide maps 1. The remote server 902 may send the guide maps 2 to the mobile phone 901; the mobile phone 901 receives the guide maps 2 from the remote server 902, identifies their labels, determines their scores, and displays the guide maps 2 according to the scores.
Note that if the mobile phone 901 sends a plurality of guide maps 1 to the remote server 902, the remote server 902 may score the guide maps 1 and use the score of each guide map 1 as its label. The remote server scores each guide map 1 and sends the scored guide maps 1 back to the mobile phone 901.
It should be noted that if the image scoring model is not set in the mobile phone and the mobile phone does not interact with the remote server, the mobile phone may display the guidance map 1 according to any display sequence.
Step 803: in response to a selection operation on the first guide graph, the mobile phone determines a first ISP parameter of a device generating the first guide graph.
It can be understood that the ISP parameters differ among electronic devices manufactured by different manufacturers, so the shooting styles of images generated with the photographing functions of these devices differ. When the user wants to use the photographing function of the mobile phone to generate images of another shooting style, the mobile phone determines the image of the shooting style the user wants to obtain (that is, the first guide map), and the mobile phone can simulate the shooting style of the first guide map to generate images with the same shooting style, so as to meet the user's requirement.
The first ISP parameter of the device that shot the first guide map may be determined by the ISP parameter estimation model. The manner in which the mobile phone determines the first ISP parameter of the device that generated the first guide map has been described in detail above and is not repeated here.
Step 804: the mobile phone sets the ISP parameters as the first ISP parameters.
Assume the mobile phone uses the preset ISP parameters and generates images of shooting style 1; for example, the preview image 405 is an image of shooting style 1. The first guide map is an image of shooting style 2, and shooting style 1 is different from shooting style 2. The mobile phone sets its ISP parameters to the first ISP parameters, so that it can generate images of shooting style 2. In this way, the mobile phone achieves the purpose of simulating the shooting style.
Illustratively, a preset ISP parameter file is stored in the memory of the mobile phone. When the ISP runs, it may access the address of the preset ISP parameter file, call the file, and parse it to obtain the preset ISP parameters. When the ISP processes an image with a preset algorithm, the preset ISP parameters are used as the parameters of the algorithm, so that the ISP can implement its image processing function.
A preset algorithm can be understood as a functional relationship between input and output; the algorithm includes parameters, and the values of these parameters affect the processing effect of the algorithm. Take the color correction algorithm in the ISP as an example: the processing parameters of the color correction algorithm are included in the ISP parameters. The processing parameters of the color correction algorithm are obtained after the preset ISP parameters are parsed, and the ISP calls the related parameters when running the color correction algorithm, so as to correct the colors of the image.
In some implementations, the mobile phone determines the first ISP parameters corresponding to the first guide map, and the mobile phone system has the right to access and modify the preset ISP parameter file. The mobile phone generates a first ISP parameter file from the first ISP parameters and can replace the preset ISP parameter file with the first ISP parameter file. Specifically, the mobile phone may pack the first ISP parameters into a first data packet and convert the first data packet into a binary file; this binary file is the first ISP parameter file.
It will be appreciated that, after the preset ISP parameter file is replaced with the first ISP parameter file, the ISP will call the first ISP parameter file at runtime to process raw domain images. In this case, the mobile phone can generate images of shooting style 2.
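A sketch of generating and swapping in such a parameter file; the little-endian float layout and the file path are assumptions for illustration, since real devices use vendor-defined formats:

import os
import struct

def write_isp_param_file(path, params):
    # Pack an ordered list of float ISP parameters into a binary file
    # (a count followed by little-endian floats), then atomically replace
    # the preset parameter file so the ISP picks up the new values.
    payload = struct.pack("<I%df" % len(params), len(params), *params)
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(payload)
    os.replace(tmp, path)   # atomic swap of the preset file

# Hypothetical path and values:
write_isp_param_file("/vendor/etc/camera/isp_params.bin", [1.9, 1.0, 1.6, 2.2])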
Step 805: and displaying a photographed preview interface by the mobile phone, wherein the preview interface comprises a preview image of a second photographing style.
It can be understood that, while the mobile phone sets the preset ISP parameter to the first ISP parameter, the ISP may suspend processing of raw domain images, which may affect the display of the preview image in the photographing mode. In this case, the mobile phone may pause displaying the preview image, for example by displaying the first guide map, or the mobile phone may display a prompt asking the user to wait for the preview image.
After the mobile phone sets the ISP parameter to the first ISP parameter, the mobile phone displays the photographing preview interface. Since the ISP parameters called by the ISP have changed, the preview image displayed by the mobile phone is an image of the second shooting style. In this way, the user can learn the photographing effect of the mobile phone from the preview interface.
When the mobile phone displays the preview interface, the preview interface may further include a control for switching the shooting style. As shown in fig. 5 (c), the preview interface of the photographing mode includes the simulated shooting style control 406. If the mobile phone receives a trigger operation by the user on the simulated shooting style control 406, the mobile phone may perform steps 802-805 described above.
Step 806: and triggering a shooting key of the mobile phone, and generating a first shot image by the mobile phone, wherein the shooting style of the first shot image is the same as that of the first guide image.
If the mobile phone displays the preview image of the second shooting style and the shooting key of the mobile phone is triggered, the mobile phone generates the first captured image. The first guide map is an image of the second shooting style, so the first captured image is also an image of the second shooting style.
It can be understood that, when the mobile phone uses the first ISP parameter, if the photo mode is switched to the video recording mode (or referred to as the video mode), the mobile phone may generate a video file. The video file is composed of a plurality of frames of images, and each frame of image is an image of the second shooting style.
In some implementations, when the mobile phone closes the camera application, the ISP parameters may revert to the preset ISP parameters. That is, each time the mobile phone starts the camera application and displays the photographing preview interface, the preview images in the preview interface are preview images of the first shooting style.
Example 2
In the process of editing an image on the mobile phone, an image with the same shooting style as a guide map is obtained by using the photographing method provided by this embodiment of the application.
Please refer to fig. 11, which is a flowchart of a photographing method provided in the present application. As shown in fig. 11, the method includes steps 1101-1106.
Step 1101: and triggering a mobile phone shooting key, generating a second shooting image by the mobile phone, and storing a raw domain image corresponding to the second shooting image.
Note that the second captured image is an image of the first shooting style; that is, the mobile phone generates the second captured image while the ISP parameters are the preset ISP parameters. It can be understood that, when the mobile phone generates the first captured image, the raw domain image corresponding to the first captured image may also be stored.
Adjusting the shooting style of an image is achieved by changing the ISP parameters, so that the mobile phone generates images of other shooting styles. Therefore, the mobile phone stores the raw domain image of the target object, so that the ISP in the mobile phone can process the raw domain image again to obtain images of other shooting styles (such as the first shooting style or the third shooting style).
Illustratively, the mobile phone is in the photographing mode, the shooting key of the mobile phone is triggered, and the image sensor acquires a raw domain image. The mobile phone copies the raw domain image to generate a first raw domain image and a second raw domain image, and the first raw domain image is the same as the second raw domain image. The first raw domain image is transmitted to the ISP, and the ISP processes it to generate the second captured image. Meanwhile, the mobile phone stores the second raw domain image.
In some implementations, when the mobile phone generates the second shot image of the target object, the raw domain image collected by the mobile phone image sensor is saved by default, so that a function of simulating a shooting style can be provided when the image is edited later.
In other implementations, because the raw domain image occupies a large amount of memory, the mobile phone can display prompt information before the shooting key is triggered, the prompt information asking whether the shooting style of the captured image will need to be edited later. If the mobile phone receives an operation indicating that later editing of the shooting style is required, the mobile phone stores the raw domain image of the captured image (that is, of the second captured image). If the mobile phone receives an operation indicating that no later editing is required, the mobile phone does not store the raw domain image corresponding to the second captured image, saving the memory space of the mobile phone.
Step 1102: in response to an operation of editing the image, the mobile phone displays an image editing interface including the second captured image.
The image editing interface shown in fig. 6 (a) includes the second captured image (that is, the image 601). Fig. 6 (a) has already been described in the foregoing scenario two and is not repeated here.
Illustratively, the mobile phone receives a click operation of the user on the editing control 63 and determines that the user wants to edit the second captured image. In response to the touch operation on the editing control 63, the mobile phone displays the interface shown in fig. 6 (b), and the mobile phone can edit the second captured image according to the user's operations. The interface shown in fig. 6 (b) includes the simulated shooting style control 66, the exposure control 67, the contrast control 68, and the like; the positions of these controls can be slid left and right in response to a left-right slide operation. When a control is located in the middle position, the name of the control is displayed. As shown in fig. 6 (b), the simulated shooting style control 66 is located in the middle position, so the name "simulated shooting style" of the control is displayed.
Step 1103: and the mobile phone receives the operation of adjusting the shooting style and displays at least one guide graph.
For example, when the mobile phone displays the interface shown in fig. 6 (b) and receives a click operation of the user on the simulated shooting style control 66, the mobile phone determines that the user needs to adjust the shooting style of the second captured image. In response to the touch operation on the simulated shooting style control, the mobile phone displays the interface shown in fig. 6 (c); the interface for editing the second captured image includes the first guide map 407, the second guide map 408, and the third guide map 409.
When the mobile phone displays a plurality of guide maps, the content is the same as that of step 802 in embodiment 1 and is not repeated here. It is understood that the relevant technical details in this embodiment may also be applied to embodiment 1.
Step 1104: and responding to the selection operation of the user on the first guide diagram, and determining a first ISP parameter corresponding to the first guide diagram by the mobile phone.
It will be appreciated that the first ISP parameter of the device that generated the first guide map is determined by the ISP parameter estimation model described above. Therefore, for the specific manner of determining the first ISP parameter, reference may be made to the related implementation steps above, which are not repeated here.
Step 1105: and the mobile phone sets the ISP parameters as the first ISP parameters.
It should be noted that steps 1104 and 1105 are the same as steps 803 and 804, and the implementation details of those steps can also be applied to steps 1104 and 1105; they are not repeated here.
Step 1106: the mobile phone processes the raw domain image by adopting the ISP to obtain a first shot image, wherein the shooting style of the first shot image is the same as that of the first guide graph.
The mobile phone copies the original raw domain image to obtain the first raw domain image and the second raw domain image, and the mobile phone stores the second raw domain image. When the ISP parameters of the mobile phone are set to the first ISP parameters, the ISP receives the second raw domain image and processes it to generate the first captured image. Since the parameter of the ISP in the mobile phone is the first ISP parameter, the shooting style of the first captured image is the same as the shooting style of the first guide map.
It should be noted that, after the mobile phone generates the first captured image from the second raw domain image, the memory space storing the second raw domain image may be released. In this case, the mobile phone can no longer generate captured images of other shooting styles. Although a guide map includes at least one target object of the captured image, the content of the guide maps displayed by the mobile phone is generally not identical to the content of the captured image. If the user then selects another guide map (such as a second guide map) and wants the mobile phone to generate the corresponding captured image, it is difficult for the mobile phone to generate an image in that shooting style, because the second raw domain image from which the captured image was generated has been released.
In some implementations, the mobile phone may copy the second raw domain image to obtain a third raw domain image and use the third raw domain image as the input of the ISP to generate the first captured image, so that the second raw domain image is preserved. When the user views the first captured image, the mobile phone can display first prompt information asking the user whether to store the first captured image. If the mobile phone receives a storage instruction for storing the first captured image, the mobile phone stores it. Further, the mobile phone can also display second prompt information, which prompts the user whether to generate images of other shooting styles. If the mobile phone receives an instruction to generate another shooting style, the mobile phone displays at least one guide map again, where the guide maps no longer include the first guide map (whose shooting style is the same as that of the first captured image). In addition, if the mobile phone receives an instruction not to store the first captured image, the mobile phone can also display the second prompt information.
It is understood that the mobile phone may repeatedly display the first prompt information and the second prompt information, so that the user can obtain a captured image in the same shooting style as a chosen guide map. If the mobile phone receives an instruction to stop generating images of other shooting styles, the mobile phone can delete the saved second raw domain image, releasing memory and reducing the memory space occupied by the mobile phone during operation.
The above description takes the electronic device being a mobile phone as an example; when the electronic device is another device, images may be generated by the method described above, and details are not repeated here.
It is understood that the electronic device includes hardware structures and/or software modules for performing the functions in order to realize the functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed in hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
An embodiment of the present application further provides an electronic device, including: the camera module is used for collecting images; a display screen for displaying the preview image and the generated photographed image; one or more processors and one or more memories. One or more memories coupled to the one or more processors are used to store computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the associated method steps described above to implement the method of taking a picture in the above embodiments.
Embodiments of the present application further provide a chip system, where the chip system includes at least one processor and at least one interface circuit. The processor and the interface circuit may be interconnected by wires. For example, the interface circuit may be used to receive signals from other devices (e.g., a memory of an electronic device). As another example, the interface circuit may be used to send signals to other devices (e.g., a processor). Illustratively, the interface circuit may read instructions stored in the memory and send the instructions to the processor. The instructions, when executed by the processor, may cause the electronic device to perform the various steps in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium includes computer instructions, and when the computer instructions are run on the electronic device, the electronic device is enabled to execute each function or step executed by the mobile phone in the foregoing method embodiment.
The embodiment of the present application further provides a computer program product, which when running on a computer, causes the computer to execute each function or step executed by the mobile phone in the above method embodiments.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or multiple physical units, that is, may be located in one place, or may be distributed in multiple different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partially contributed to by the prior art, or all or part of the technical solutions may be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A photographing method is characterized by being applied to electronic equipment, wherein the electronic equipment comprises a preset Image Signal Processor (ISP) parameter, and the image signal processor processes a first raw domain image by adopting the preset ISP parameter to generate a first image;
the photographing method comprises the following steps:
acquiring a first guide graph, and determining a first ISP parameter corresponding to the first guide graph, wherein the preset ISP parameter is different from the first ISP parameter;
setting an ISP parameter to the first ISP parameter;
and the image signal processor processes the first raw domain image by adopting the first ISP parameters to generate a second image.
2. The method according to claim 1, wherein before the obtaining of the first guide map and the determining of the first ISP parameter corresponding to the first guide map, and the preset ISP parameter being different from the first ISP parameter, the method further comprises:
starting a camera application, and displaying a preview interface, wherein the preview interface comprises a first control and a first preview image, and the image signal processor processes an image signal by adopting the preset ISP parameters to generate the first preview image;
and responding to the clicking operation of the user on the first control, and displaying at least one third image, wherein the at least one third image comprises the first guide graph.
3. The method according to claim 1, wherein before the obtaining of the first guide map and the determining of the first ISP parameter corresponding to the first guide map, and the preset ISP parameter being different from the first ISP parameter, the method further comprises:
starting a camera application, and displaying a preview interface, wherein the preview interface comprises a first preview image, and the image signal processor processes an image signal by adopting the preset ISP parameters to generate the first preview image;
identifying individual objects in the first preview image;
displaying at least one fourth image according to an individual object in the first preview image, wherein the at least one fourth image comprises the first guide graph; wherein each of the fourth images includes at least one of the individual objects.
4. The method according to claim 1, wherein before the obtaining of the first guide map and the determining of the first ISP parameter corresponding to the first guide map, and the preset ISP parameter being different from the first ISP parameter, the method further comprises:
in response to receiving an operation of editing the first image, displaying an editing interface of the first image, wherein the editing interface of the first image comprises a second control;
and responding to the clicking operation of the user on the second control, and displaying at least one third image, wherein the at least one third image comprises the first guide graph.
5. A method according to claim 2 or 3, wherein the image signal processor processes the first raw domain image using the first ISP parameters, the method further comprising, before generating a second image:
the preview interface comprises a second preview image, and the image signal processor processes an image signal by adopting the first ISP parameter to generate the second preview image;
and triggering a shooting key to acquire the first raw domain image.
6. The method of claim 4, wherein, in response to receiving an operation to edit the first image, displaying an editing interface for the first image, the editing interface for the first image comprising a second control, the method further comprises:
acquiring a second raw domain image, generating the first raw domain image, and storing the first raw domain image, wherein the first raw domain image is the same as the second raw domain image;
and the image signal processor processes the second raw domain image by adopting the preset ISP parameters to generate the first image.
7. The method of any of claims 1-6, wherein the first ISP parameters are derived by processing the first guide map through a neural network.
8. The method of claim 2 or 4, further comprising:
and scoring the at least one third image through an image scoring model, and displaying the at least one third image according to the score of the at least one third image.
9. The method of claim 3, wherein before displaying at least one fourth image based on individual objects in the preview image, the at least one fourth image comprising the first guide map, the method further comprises:
and scoring the at least one fourth image through an image scoring model, and displaying the at least one fourth image according to the score of the at least one fourth image.
10. The method of claim 6, wherein the image signal processor processes the second raw domain image using the preset ISP parameters, and after generating the first image, the method further comprises:
displaying prompt information, wherein the prompt information is used for prompting a user to trigger and edit the first image within a preset time length;
and if the operation of editing the first image is not received within the preset duration, deleting the first raw domain image.
11. An electronic device, comprising:
the camera is used for collecting images; an image signal processor for processing raw domain images; the display screen is used for displaying an interface;
a memory, one or more processors; the one or more processors are coupled with the memory, the camera, the signal processor and the display screen;
and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions; the instructions, when executed by the electronic device, cause the electronic device to perform the method of taking a picture as claimed in any one of claims 1-10.
12. A computer-readable storage medium comprising computer instructions which, when executed on a computer, cause the computer to perform the photographing method according to any one of claims 1 to 10.
CN202011482589.5A 2020-12-15 2020-12-15 Photographing method and related equipment Active CN114640783B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011482589.5A CN114640783B (en) 2020-12-15 2020-12-15 Photographing method and related equipment
PCT/CN2021/135118 WO2022127611A1 (en) 2020-12-15 2021-12-02 Photographing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011482589.5A CN114640783B (en) 2020-12-15 2020-12-15 Photographing method and related equipment

Publications (2)

Publication Number Publication Date
CN114640783A true CN114640783A (en) 2022-06-17
CN114640783B CN114640783B (en) 2024-03-05

Family

ID=81944360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011482589.5A Active CN114640783B (en) 2020-12-15 2020-12-15 Photographing method and related equipment

Country Status (2)

Country Link
CN (1) CN114640783B (en)
WO (1) WO2022127611A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115589539A (en) * 2022-11-29 2023-01-10 荣耀终端有限公司 Image adjusting method, device and storage medium
CN117560552A (en) * 2024-01-10 2024-02-13 荣耀终端有限公司 Shooting control method, electronic device and readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116048379B (en) * 2022-06-30 2023-10-24 荣耀终端有限公司 Data recharging method and device
CN115442517B (en) * 2022-07-26 2023-07-25 荣耀终端有限公司 Image processing method, electronic device, and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080091966A1 (en) * 2006-10-13 2008-04-17 Alpha Imaging Technology, R.O.C. Structure and method for booting an image signal processor
CN105516507A (en) * 2015-12-25 2016-04-20 联想(北京)有限公司 Information processing method and electronic equipment
CN106454097A (en) * 2016-10-29 2017-02-22 深圳市金立通信设备有限公司 Photographing method and photographing device
US20190043209A1 (en) * 2018-02-05 2019-02-07 Intel Corporation Automatic tuning of image signal processors using reference images in image processing environments
CN111885295A (en) * 2018-03-26 2020-11-03 华为技术有限公司 Shooting method, device and equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4325415B2 (en) * 2004-01-27 2009-09-02 Nikon Corp An electronic camera having a finish setting function and a processing program for customizing the finish setting function of the electronic camera.
US8214766B1 (en) * 2008-07-09 2012-07-03 Adobe Systems Incorporated Method and system for preview control for image adjustment
US9019400B2 (en) * 2011-05-31 2015-04-28 Olympus Imaging Corp. Imaging apparatus, imaging method and computer-readable storage medium
KR20160146281A (en) * 2015-06-12 2016-12-21 삼성전자주식회사 Electronic apparatus and method for displaying image
CN109788204A (en) * 2019-03-12 2019-05-21 维沃移动通信有限公司 Shoot processing method and terminal device
CN110399924B (en) * 2019-07-26 2021-09-07 北京小米移动软件有限公司 Image processing method, device and medium
CN110533611A (en) * 2019-08-26 2019-12-03 维沃移动通信有限公司 Image processing method and terminal device

Also Published As

Publication number Publication date
CN114640783B (en) 2024-03-05
WO2022127611A1 (en) 2022-06-23

Similar Documents

Publication Publication Date Title
CN114640783B (en) Photographing method and related equipment
US11412153B2 (en) Model-based method for capturing images, terminal, and storage medium
CN113592887B (en) Video shooting method, electronic device and computer-readable storage medium
US20230217098A1 (en) Shooting method, graphical interface, and related apparatus
CN106030503A (en) Adaptive video processing
CN111508039A (en) Word processing method of ink screen and communication terminal
CN113110731B (en) Method and device for generating media content
CN115242983A (en) Photographing method, electronic device, computer program product, and readable storage medium
WO2021180046A1 (en) Image color retention method and device
CN117201930B (en) Photographing method and electronic equipment
CN113891008B (en) Exposure intensity adjusting method and related equipment
CN117274106B (en) Photo restoration method, electronic equipment and related medium
CN115546041B (en) Training method of light supplementing model, image processing method and related equipment thereof
EP4277257A1 (en) Filming method and electronic device
CN117170560B (en) Image transformation method, electronic equipment and storage medium
CN116708996B (en) Photographing method, image optimization model training method and electronic equipment
CN115225822B (en) Data processing method and electronic equipment
CN114245011B (en) Image processing method, user interface and electronic equipment
EP4343698A1 (en) Modeling method and related electronic device, and storage medium
TWI410128B (en) Digital camera and operating method thereof
CN116709002A (en) Video shooting method, electronic device and computer readable storage medium
CN114363507A (en) Image processing method and device
CN116567408A (en) Display method and electronic equipment
CN117119316A (en) Image processing method, electronic device, and readable storage medium
CN117201929A (en) Shooting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant