CN116029951A - Image processing method and electronic device

Info

Publication number
CN116029951A
Authority
CN
China
Prior art keywords
image
processing
semantic
area
target
Prior art date
Legal status
Pending
Application number
CN202210588320.8A
Other languages
Chinese (zh)
Inventor
荀潇阳
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202210588320.8A
Publication of CN116029951A
Legal status: Pending


Abstract

The application relates to the field of image processing, and provides an image processing method and an electronic device. The image processing method includes: starting a camera application in the electronic device; acquiring a first image; performing first image processing on the first image to obtain a second image, where the second image is an image in a second color space; performing semantic segmentation processing on the first image to obtain a semantic segmentation result, where the semantic segmentation result indicates semantic information in the first image; invoking target parameters based on the semantic segmentation result, where the target parameters correspond to the semantic segmentation result; performing second image processing on the first image based on the target parameters to obtain a target image; and performing fusion processing on the target image and the second image to obtain a fused image, where the image quality of the fused image is better than that of the second image. Based on this technical solution, the detail information and/or sharpness of the fused image can be improved, thereby improving the image quality of the fused image.

Description

Image processing method and electronic device
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and an electronic device.
Background
With the rapid development and wide application of multimedia and network technology, people use image information extensively in daily life and production activities. Currently, when image processing is performed by the image signal processing (Image Signal Processing, ISP) pipeline in an electronic device, different areas in an image can only be processed based on one set of general parameters. However, because a single set of general parameters must balance the whole image, the image quality of local image areas may be sacrificed, so that different areas in the image cannot all reach their optimal image effect. For example, assume that an image includes a near-view area and a far-view area; the amount of information in the far-view area is generally smaller than that in the near-view area. If image processing is performed with the far-view area as the reference, the near-view area may suffer from excessively high sharpness of details; if image processing is performed with the near-view area as the reference, the far-view area may retain too little detail information.
Therefore, how to process an image so as to improve its quality is a problem that needs to be solved.
Disclosure of Invention
The application provides an image processing method and an electronic device, which can improve the detail information and/or sharpness of a fused image, thereby improving the image quality of the fused image.
In a first aspect, an image processing method is provided, applied to an electronic device, and the image processing method includes:
starting a camera application program in the electronic device;
acquiring a first image, wherein the first image is an image of a first color space;
performing first image processing on the first image to obtain a second image, wherein the second image is an image in a second color space;
carrying out semantic segmentation processing on the first image to obtain a semantic segmentation result, wherein the semantic segmentation result is used for indicating semantic information in the first image;
invoking a target parameter based on the semantic segmentation result, wherein the target parameter corresponds to the semantic segmentation result;
performing second image processing on the first image based on the target parameters to obtain a target image;
and performing fusion processing on the target image and the second image to obtain a fused image, wherein the image quality of the fused image is better than that of the second image.
In this embodiment of the application, target parameters of image processing adapted to different semantic information can be obtained based on the semantic segmentation result of the first image. Image processing is performed on the first image based on the target parameters, so that local enhancement can be applied to different semantic areas in the image, obtaining a target image, that is, a locally enhanced image. The locally enhanced image and the second image are fused to obtain a fused image whose detail information or sharpness is superior to that of the second image. Compared with an existing image processing method that adopts one set of general parameters, the image processing method of the present application does not sacrifice local image areas for the sake of balancing the whole image through a set of general parameters; the target parameters are obtained based on the semantic segmentation result and adapted to the semantic information in the image, so performing image processing on the first image based on the target parameters enables image enhancement of local image areas. In other words, with this image processing method, the detail and/or sharpness of different semantic areas in the fused image can be improved, thereby improving the image quality of the fused image.
In addition, the image processing method of the present application realizes image enhancement based on different semantic information within ISP processing; compared with adding an extra algorithm for image enhancement after ISP processing, the image processing method of the present application places lower performance requirements on the electronic device and can save power consumption of the electronic device to a certain extent.
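For illustration only, the following Python sketch (NumPy only) outlines the flow of the method of the first aspect. Every function body is a toy stand-in, since the application does not disclose the concrete ISP stages or parameter values; only the structure (one general-parameter pass, one semantics-driven pass per label, and per-region fusion) mirrors the steps above.

```python
import numpy as np

# Hypothetical stand-ins for the ISP stages described above; the real
# pipeline (demosaic, AWB, CCM, sharpening, ...) is hardware/vendor specific.
def first_image_processing(raw: np.ndarray) -> np.ndarray:
    """General-parameter ISP pass: Raw in, YUV-like image out (stand-in)."""
    return np.repeat(raw[..., None], 3, axis=-1)  # fake Raw -> 3-channel

def second_image_processing(raw: np.ndarray, params: dict) -> np.ndarray:
    """ISP pass re-run with semantics-specific target parameters (stand-in)."""
    img = first_image_processing(raw).astype(np.float32)
    return np.clip(img * params.get("gain", 1.0), 0, 255)

def semantic_segmentation(raw: np.ndarray) -> np.ndarray:
    """Per-pixel label map (stand-in: everything labelled 1)."""
    return np.ones(raw.shape[:2], dtype=np.int32)

PARAM_SET = {1: {"gain": 1.2}, 2: {"gain": 0.9}}  # label -> target parameters

def process(raw: np.ndarray) -> np.ndarray:
    second = first_image_processing(raw).astype(np.float32)   # second image
    labels = semantic_segmentation(raw)                        # semantic result
    fused = second.copy()
    for label in np.unique(labels):
        params = PARAM_SET.get(int(label))
        if params is None:
            continue                                           # no adapted params
        target = second_image_processing(raw, params)          # local enhancement
        mask = (labels == label)[..., None]                    # semantic region
        fused = np.where(mask, target, fused)                  # fuse per region
    return fused.astype(np.uint8)

raw = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # toy 8x8 Raw frame
print(process(raw).shape)  # (8, 8, 3)
```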
With reference to the first aspect, in some implementations of the first aspect, the fusing the target image and the second image to obtain a fused image includes:
and performing fusion processing on the target image and the second image based on the semantic segmentation result to obtain the fused image.
In this embodiment of the application, because the target image is a locally enhanced image, an enhanced image area can be determined in the target image based on the semantic segmentation result and fused with the second image, so that the detail information, sharpness, and the like of the fused image are improved, thereby improving the image quality of the fused image.
With reference to the first aspect, in some implementations of the first aspect, the performing the fusion processing on the target image and the second image based on the semantic segmentation result to obtain the fused image includes:
determining a first image region in the target image based on the semantic segmentation result;
determining a second image area based on the first image area, wherein the second image area includes the image content of the first image area, and the area of the second image area is larger than that of the first image area;
and performing fusion processing on the second image area and the second image to obtain the fused image.
In this embodiment of the application, when the fusion processing is performed, the second image area may be used to determine the image area corresponding to the semantic information in the target image, and that image area is fused with the second image; that is, a fusion mode with edge expansion based on the semantic position is adopted. This fusion mode allows the resulting fused image to transition smoothly between different image areas and avoids abrupt region edges, thereby improving the image quality of the fused image.
With reference to the first aspect, in certain implementations of the first aspect, the determining a second image area based on the first image area includes:
and carrying out up-sampling processing on the first image area to obtain the second image area.
With reference to the first aspect, in certain implementations of the first aspect, the second image area is a circular image area.
In this embodiment of the application, because a circle has smoother edges than a rectangle, adopting a circular edge-expansion mode can reduce abrupt or discontinuous color, detail, and the like at the fusion edges of the fused image obtained after fusion processing, thereby improving the image quality of the fused image. In addition, when the fusion coefficient is calculated with a circular edge-expansion mode, there is essentially no repeatedly calculated area, so the performance requirement is lower than that of a rectangular edge-expansion mode.
In one possible implementation, if the first image area in the target image is an image area with corners, a circular edge-expansion mode may be used to obtain the second image area.
For example, the center of the first image area is determined first, and a circular second image area is obtained by expanding outward from that center.
In one possible implementation, if the first image area in the target image is an image area without corners, the second image area may be obtained by an upsampling edge-expansion mode.
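As a minimal sketch of these two edge-expansion modes, assuming simple geometry; the margin and scale values are arbitrary illustrative choices, not values from the application:

```python
import numpy as np

def circular_expand(mask, margin=2):
    # Circular edge expansion for regions with corners: take the region's
    # center and keep everything within (max extent + margin) of it.
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                      # region center
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max() + margin
    yy, xx = np.indices(mask.shape)
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2   # circular second area

def upsample_expand(mask, scale=4 / 3):
    # Coefficient (resize/upsampling-style) expansion for corner-free regions:
    # grow the bounding box of the region by `scale` around its center.
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    h = (ys.max() - ys.min() + 1) * scale / 2
    w = (xs.max() - xs.min() + 1) * scale / 2
    yy, xx = np.indices(mask.shape)
    return (np.abs(yy - cy) <= h) & (np.abs(xx - cx) <= w)

mask = np.zeros((12, 12), dtype=bool)
mask[4:7, 4:7] = True                                  # 3x3 first image area
print(circular_expand(mask).sum(), upsample_expand(mask).sum())
```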
With reference to the first aspect, in some implementations of the first aspect, the fusing the target image and the second image to obtain a fused image includes:
and performing fusion processing on the image of the first channel of the target image and the image of the first channel of the second image to obtain the fused image.
With reference to the first aspect, in certain implementations of the first aspect, the first channel is a Y channel, or the first channel is a UV channel.
In one possible implementation, if the detail information of the fused image is to be improved, the Y-channel images of the reference image and of each locally enhanced image may be extracted for fusion processing.
In one possible implementation, if the color information of the fused image is to be improved, the UV-channel images of the reference image and of each locally enhanced image may be extracted for fusion processing.
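A minimal sketch of this channel-wise fusion, assuming YUV arrays with Y in channel 0 and UV in channels 1 and 2; the function name and layout are illustrative assumptions:

```python
import numpy as np

def fuse_channel(reference, enhanced, mask, channel):
    """Replace one channel of `reference` with `enhanced` inside `mask`.

    reference / enhanced: H x W x 3 YUV arrays; mask: H x W bool;
    channel: slice(0, 1) selects Y (detail), slice(1, 3) selects UV (color).
    """
    fused = reference.copy()
    merged = np.where(mask[..., None], enhanced, reference)  # region-wise pick
    fused[..., channel] = merged[..., channel]               # only this channel
    return fused

ref = np.random.rand(6, 6, 3).astype(np.float32)        # second (reference) image
enh = np.clip(ref * 1.1, 0.0, 1.0)                      # a locally enhanced image
m = np.zeros((6, 6), dtype=bool)
m[2:5, 2:5] = True                                      # one semantic region

detail_fused = fuse_channel(ref, enh, m, slice(0, 1))   # improve Y-channel detail
color_fused = fuse_channel(ref, enh, m, slice(1, 3))    # improve UV-channel color
```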
With reference to the first aspect, in certain implementation manners of the first aspect, the semantic segmentation result includes at least two labels, where the at least two labels include a first label and a second label, the first label is used to indicate semantic information of a third image area in the first image, the second label is used to indicate semantic information of a fourth image area in the first image, the target parameter includes a first parameter and a second parameter, the first parameter corresponds to the first label, the second parameter corresponds to the second label, the target image includes a first target image and a second target image, and the second image processing is performed on the first image based on the target parameter to obtain a target image, where the method includes:
performing the second image processing on the first image based on the first parameter to obtain the first target image;
and performing the second image processing on the first image based on the second parameter to obtain the second target image.
In one possible implementation, the first image may include a green plant and a portrait. Based on the semantic tag corresponding to the green plant, a first parameter corresponding to the green-plant tag may be called from a preset parameter set to perform ISP processing on the first image, obtaining an enhanced image of the green-plant area; based on the semantic tag corresponding to the portrait, a second parameter corresponding to the portrait tag may be called from the preset parameter set to perform ISP processing on the first image, obtaining an enhanced image of the portrait area.
In a second aspect, an electronic device is provided that includes one or more processors and memory; the memory is coupled with the one or more processors, the memory is for storing computer program code, the computer program code comprising computer instructions that the one or more processors invoke to cause the electronic device to perform:
starting a camera application program in the electronic device;
acquiring a first image, wherein the first image is an image of a first color space;
performing first image processing on the first image to obtain a second image, wherein the second image is an image in a second color space;
carrying out semantic segmentation processing on the first image to obtain a semantic segmentation result, wherein the semantic segmentation result is used for indicating semantic information in the first image;
invoking a target parameter based on the semantic segmentation result, wherein the target parameter corresponds to the semantic segmentation result;
performing second image processing on the first image based on the target parameters to obtain a target image;
and performing fusion processing on the target image and the second image to obtain a fused image, wherein the image quality of the fused image is better than that of the second image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
performing fusion processing on the target image and the second image based on the semantic segmentation result to obtain the fused image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
determining a first image region in the target image based on the semantic segmentation result;
determining a second image area based on the first image area, wherein the second image area includes the image content of the first image area, and the area of the second image area is larger than that of the first image area;
and performing fusion processing on the second image area and the second image to obtain the fused image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
and carrying out up-sampling processing on the first image area to obtain the second image area.
With reference to the second aspect, in certain implementations of the second aspect, the second image area is a circular image area.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
performing fusion processing on the image of the first channel of the target image and the image of the first channel of the second image to obtain the fused image.
With reference to the second aspect, in certain implementations of the second aspect, the first channel is a Y channel, or the first channel is a UV channel.
With reference to the second aspect, in certain implementations of the second aspect, the semantic segmentation result includes at least two labels, the at least two labels including a first label and a second label, the first label being used to indicate semantic information of a third image region in the first image, the second label being used to indicate semantic information of a fourth image region in the first image, the target parameter including a first parameter and a second parameter, the first parameter corresponding to the first label, the second parameter corresponding to the second label, the target image including a first target image and a second target image, the one or more processors invoking the computer instructions to cause the electronic device to perform:
performing the second image processing on the first image based on the first parameter to obtain the first target image;
and performing the second image processing on the first image based on the second parameter to obtain the second target image.
In a third aspect, an electronic device is provided, comprising means for performing the image processing method of the first aspect or any implementation of the first aspect.
In a fourth aspect, an electronic device is provided, the electronic device including one or more processors, a memory, a first camera module, and a second camera module; the memory is coupled with the one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions that the one or more processors call to cause the electronic device to perform the image processing method of the first aspect or any implementation of the first aspect.
In a fifth aspect, a chip system is provided, the chip system being applied to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform the image processing method of the first aspect or any implementation of the first aspect.
In a sixth aspect, there is provided a computer readable storage medium storing computer program code which, when executed by an electronic device, causes the electronic device to perform the image processing method of the first aspect or any implementation manner of the first aspect.
In a seventh aspect, there is provided a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform the image processing method of the first aspect or any implementation of the first aspect.
In this embodiment of the application, target parameters of image processing adapted to different semantic information can be obtained based on the semantic segmentation result of the first image. Image processing is performed on the first image based on the target parameters, so that local enhancement can be applied to different semantic areas in the image, obtaining a target image, that is, a locally enhanced image. The locally enhanced image and the second image are fused to obtain a fused image whose detail information or sharpness is superior to that of the second image. Compared with an existing image processing method that adopts one set of general parameters, the image processing method of the present application does not sacrifice local image areas for the sake of balancing the whole image through a set of general parameters; the target parameters are obtained based on the semantic segmentation result and adapted to the semantic information in the image, so performing image processing on the first image based on the target parameters enables image enhancement of local image areas. In other words, with this image processing method, the detail and/or sharpness of different semantic areas in the fused image can be improved, thereby improving the image quality of the fused image.
In addition, the image processing method of the present application realizes image enhancement based on different semantic information within ISP processing; compared with adding an extra algorithm for image enhancement after ISP processing, the image processing method of the present application places lower performance requirements on the electronic device and can save power consumption of the electronic device to a certain extent.
Drawings
FIG. 1 is a schematic diagram of a hardware system suitable for use with the electronic device of the present application;
FIG. 2 is a schematic diagram of a software system suitable for use with the electronic device of the present application;
FIG. 3 is a schematic diagram of a prior art ISP pathway flow;
FIG. 4 is a schematic diagram of an application scenario suitable for use in embodiments of the present application;
FIG. 5 is a schematic flow chart of an image processing method provided by an embodiment of the present application;
FIG. 6 is a schematic flow chart of an image processing method provided by an embodiment of the present application;
FIG. 7 is a schematic flow chart of an image processing method provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart of an image processing method provided by an embodiment of the present application;
FIG. 9 is a schematic flow chart of an image processing method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an effect of an image processing method according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, the following terms "first", "second", "third", "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In order to facilitate understanding of embodiments of the present application, related concepts related to the embodiments of the present application will be briefly described first.
1. Semantic segmentation
Semantic segmentation refers to the process of linking each pixel in an image to a class label.
2. Automatic white balance (Auto White Balance, AWB)
Automatic white balance is used to enable the camera to restore a white object to white at any color temperature. Due to the influence of color temperature, white paper appears yellowish under low color temperature and bluish under high color temperature; the purpose of white balance is to make a white object appear white at any color temperature, that is, R = G = B (a minimal sketch of one classical AWB algorithm follows this concept list).
3. Noise reduction processing
Noise reduction is used to reduce noise in the image; noise present in the image may affect the visual experience of the user, and the image quality of the image may be improved to some extent through the noise reduction process.
4. Saturation level
The saturation of a color refers to the vividness of the color, also called purity.
5. Color correction (Color Correction Matrix, CCM)
Color correction is used to calibrate the accuracy of colors other than white in an image.
6. Correlated color temperature (correlated color temperature, CCT)
Some non-blackbody light sources can be described by the color temperature of the blackbody that is visually most similar to them; this is called their correlated color temperature. The symbol of correlated color temperature is Tcp, and its unit is K.
It is understood that correlated color temperature may also be referred to as color temperature estimation, or color temperature estimation value.
7. Sharpness (Sharpness)
Sharpness, sometimes referred to as "definition", is an indicator that reflects the clarity of an image and the sharpness of image edges.
8. Image enhancement
Image enhancement refers to purposefully emphasizing the overall or local nature of an image, making an originally unclear image clear or emphasizing certain features of interest.
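For the automatic white balance concept in item 2 above, the gray-world algorithm is one classical way to push a white object toward R = G = B. The application does not specify which AWB algorithm its ISP uses, so this is an illustrative sketch only:

```python
import numpy as np

def gray_world_awb(rgb: np.ndarray) -> np.ndarray:
    """Gray-world automatic white balance: scale each channel so the
    channel means are equal, pushing a white object toward R = G = B."""
    rgb = rgb.astype(np.float32)
    means = rgb.reshape(-1, 3).mean(axis=0)        # per-channel averages
    gains = means.mean() / means                   # per-channel gains
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)

# A warm (yellowish, low color temperature) flat patch: R high, B low.
patch = np.full((4, 4, 3), [200, 180, 120], dtype=np.uint8)
balanced = gray_world_awb(patch)
print(balanced[0, 0])  # roughly equal R, G, B after correction
```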
The image processing method and the electronic device in the embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 shows a hardware system suitable for use in the electronic device of the present application.
The electronic device 100 may be a cell phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an augmented reality (augmented reality, AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), a projector, etc., and the specific type of the electronic device 100 is not limited in the embodiments of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 1 does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than those shown in FIG. 1, or electronic device 100 may include a combination of some of the components shown in FIG. 1, or electronic device 100 may include sub-components of some of the components shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: application processors (application processor, AP), modem processors, graphics processors (graphics processing unit, GPU), image signal processors (image signal processor, ISP), controllers, video codecs, digital signal processors (digital signal processor, DSP), baseband processors, neural-Network Processors (NPU). The different processing units may be separate devices or integrated devices. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
Illustratively, the processor 110 may be configured to perform the image processing methods of the embodiments of the present application; for example, a camera application in an electronic device is started; acquiring a first image, wherein the first image is an image of a first color space; performing first image processing on the first image to obtain a second image, wherein the second image is an image of a second color space; carrying out semantic segmentation processing on the first image to obtain a semantic segmentation result, wherein the semantic segmentation result is used for indicating semantic information in the first image; invoking target parameters based on the semantic segmentation result, wherein the target parameters correspond to the semantic segmentation result; performing second image processing on the first image based on the target parameters to obtain a target image; and carrying out fusion processing on the target image and the second image to obtain a fusion image, wherein the image quality of the fusion image is better than that of the second image.
The connection relationships between the modules shown in fig. 1 are merely illustrative, and do not constitute a limitation on the connection relationships between the modules of the electronic device 100. Alternatively, the modules of the electronic device 100 may also use a combination of the various connection manners in the foregoing embodiments.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The electronic device 100 may implement display functions through a GPU, a display screen 194, and an application processor. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
Illustratively, the display screen 194 may be used to display images or video.
Alternatively, the electronic device 100 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
For example, an ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. The ISP can carry out algorithm optimization on noise, brightness and color of the image, and can optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
Illustratively, a camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into a standard Red Green Blue (RGB), YUV, etc. format image signal. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Illustratively, the digital signal processor is configured to process digital signals, and may process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Illustratively, video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
Illustratively, the gyroscopic sensor 180B may be used to determine a motion pose of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x-axis, y-axis, and z-axis) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B can also be used for scenes such as navigation and motion sensing games.
Alternatively, the acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically the x-axis, y-axis, and z-axis). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the gesture of the electronic device 100 as an input parameter for applications such as landscape switching and pedometer.
Illustratively, a distance sensor 180F is used to measure distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 100 may range using the distance sensor 180F to achieve fast focus.
Illustratively, ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
Illustratively, the fingerprint sensor 180H is used to capture a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to perform functions such as unlocking, accessing an application lock, taking a photograph, and receiving an incoming call.
Illustratively, the touch sensor 180K is also referred to as a touch device. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch panel". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor 180K may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194.
The hardware system of the electronic device 100 is described in detail above; the software system of the electronic device 100 is described below.
Fig. 2 is a schematic diagram of a software system of an electronic device according to an embodiment of the present application.
As shown in fig. 2, an application layer 210, an application framework layer 220, a hardware abstraction layer 230, a driver layer 240, and a hardware layer 250 may be included in the system architecture.
The application layer 210 may include camera applications, gallery, calendar, conversation, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer 220 provides application programming interfaces (application programming interface, APIs) and programming frameworks for application programs of the application layer; the application framework layer may include some predefined functions.
For example, the application framework layer 220 may include a camera access interface; camera management and camera devices may be included in the camera access interface. Wherein camera management may be used to provide an access interface to manage the camera; the camera device may be used to provide an interface to access the camera.
The hardware abstraction layer 230 is used to abstract the hardware. For example, the hardware abstraction layer may include a camera abstraction layer and other hardware device abstraction layers; the camera hardware abstraction layer may call a camera algorithm.
For example, the hardware abstraction layer 230 includes a camera hardware abstraction layer 2301 and a camera algorithm 2302; the camera algorithm 2302 may include software algorithms for image processing.
The driver layer 240 is used to provide drivers for different hardware devices. For example, the drive layer may include a camera device drive.
The hardware layer 250 may include camera devices as well as other hardware devices.
For example, hardware layer 250 includes camera device 2501.
At present, when image processing is performed in the ISP pipeline of an electronic device, semantic information of the image cannot be identified, and image processing can only be performed on different areas in the image based on one set of general parameters. However, when image processing is performed based on a single set of parameters, different areas in the image often cannot all achieve the optimal effect. For example, assume that an image includes a near-view area and a far-view area; the amount of information in the far-view area is generally smaller than that in the near-view area. If image processing is performed with the far-view area as the reference, the near-view area may suffer from excessively high sharpness of details; if image processing is performed with the near-view area as the reference, the far-view area may retain too little detail information.
For example, FIG. 3 shows an image processing method employed by an existing ISP pipeline; the method includes steps S201 to S203. Step S201 is acquiring a Raw image; step S202 is ISP pipeline processing, which includes, but is not limited to: automatic white balance processing, noise reduction processing, saturation, CCM, CCT, sharpness processing, and the like; step S203 obtains the processed image. As shown in FIG. 3, in existing ISP image processing, image content with multiple semantics can only be processed with a single set of general parameters across all regions of the image; that is, independent parameters cannot be adapted to different semantic regions in the ISP pipeline. Because one set of general parameters is applied to different semantic regions, the semantic regions in the image cannot all achieve the optimal effect at the same time.
Illustratively, the shooting scene shown in FIG. 4 includes a first shooting object 204 and a second shooting object 205, where the first shooting object 204 is far from the electronic device and the second shooting object 205 is near to the electronic device. When the electronic device is in a photographing mode, a preview interface 206 can be displayed; the preview interface 206 includes preview images of the first shooting object and the second shooting object. During ISP pipeline processing, if the adapted set of general parameters is based on the requirements of the near-view area, that is, the area where the second shooting object is located, then the image area where the first shooting object is located, that is, the far-view area, retains less image detail. For example, as shown in FIG. 4, the texture information of the first shooting object 204 cannot be accurately displayed in its preview image, resulting in poor image quality.
In view of this, embodiments of the present application provide an image processing method that can obtain target parameters of image processing adapted to different semantic information based on the semantic segmentation result of a first image. Image processing is performed on the first image based on the target parameters, so that local enhancement can be applied to different semantic areas in the image, and the obtained target image is a locally enhanced image. The locally enhanced image and the second image are fused to obtain a fused image whose detail information or sharpness is superior to that of the second image. Compared with an existing image processing method that adopts one set of general parameters, the image processing method of the present application does not sacrifice local image areas for the sake of balancing the whole image through a set of general parameters; because the target parameters are obtained based on the semantic segmentation result and adapted to the semantic information in the image, performing image processing on the first image based on the target parameters enables image enhancement of local image areas. In other words, with this image processing method, the detail and/or sharpness of different semantic areas in the fused image can be improved, thereby improving the image quality of the fused image.
The image processing method in the embodiment of the application can be applied to the fields of photographing (for example, single-view photographing, double-view photographing, photographing preview and the like), video recording, video call or other image processing; by the image processing method, after the camera module in the electronic equipment collects the image, semantic information in the image can be identified; invoking ISP parameters adapted to different semantics based on semantic information of different areas in the image; and the image processing is carried out based on ISP parameters, so that the detail information and/or definition of each region in the image can be enhanced, and the image quality is improved.
In one example, the image processing method provided by the embodiments of the present application can be applied to portrait shooting. When a portrait is shot, parameters adapted to portraits can be called during ISP processing, so that the high-frequency and low-frequency detail information or the sharpness of the processed portrait is flexibly adjusted, making the portrait in the image more natural.
In one example, the image processing method provided by the embodiment of the application can be applied to shooting of a green plant scene; when shooting the green plants, through the image processing method of the embodiment, parameters which are suitable for the green plants can be called when ISP processing is carried out, so that the processed image comprises more detail information of the green plants, and the image quality is improved.
In one example, the image processing method provided by the embodiment of the application can be applied to shooting of a building scene; when shooting a building, through the image processing method of the embodiment, parameters which are suitable for the building can be called when ISP processing is carried out, so that the processed image comprises more building detail information; for example, contour information of a building is enhanced, and image quality is improved.
It should be understood that the foregoing is illustrative of the application scenario of the embodiments of the present application, and is not intended to limit the application scenario of the present application in any way.
The image processing method provided in the embodiment of the present application is described in detail below with reference to fig. 5 to 10.
Fig. 5 is a schematic diagram of an image processing method according to an embodiment of the present application. The image processing method may be performed by the electronic device shown in fig. 1; the method 300 includes steps S310 to S370, and steps S310 to S370 are described in detail below.
Step S310, a camera application in the electronic device is started.
For example, the user may instruct the electronic device to open the camera application by clicking on an icon of the "camera" application.
For example, when the electronic device is in the locked state, the user may instruct the electronic device to open the camera application through a gesture of sliding rightward on the display screen of the electronic device. Alternatively, when the electronic device is in the locked state and the lock screen interface includes an icon of the camera application, the user instructs the electronic device to start the camera application by clicking that icon. Alternatively, when the electronic device is running another application that has the permission to call the camera application, the user may instruct the electronic device to open the camera application by clicking the corresponding control. For example, while the electronic device is running an instant messaging application, the user may instruct the electronic device to open the camera application by selecting a control for the camera function.
It should be appreciated that the above is illustrative of the operation of opening a camera application; the camera application program can be started by voice indication operation or other operation indication electronic equipment; the present application is not limited in any way.
It should also be understood that starting a camera application may refer to running the camera application.
Step S320, acquiring a first image.
The first image is an image acquired by a camera module in the electronic device; the first image is an image of a first color space.
Illustratively, the first color space may refer to a Raw color space; for example, the first image may refer to a Raw image.
Step S330, performing first image processing on the first image to obtain a second image.
Wherein the second image may refer to an image of a second color space.
For example, the second color space may be a YUV color space.
Illustratively, the first image processing may refer to ISP processing of the first image with common parameters; for example, as shown in fig. 3, the processed image in fig. 3 may be a second image.
It should be appreciated that ISP processing the first image using the common parameters may refer to ISP processing the Raw image based on a common set of ISP parameters.
Alternatively, ISP processing includes, but is not limited to: automatic white balance processing, noise reduction processing, saturation processing, CCM, CCT, or sharpness processing, etc.
Alternatively, ISP parameters may include, but are not limited to: sharpening intensity parameters, parameters corresponding to high-frequency information and low-frequency information, and parameters corresponding to noise overlapping intensity.
Step S340, performing semantic segmentation processing on the first image to obtain a semantic segmentation result.
Illustratively, semantic segmentation processing can be performed on the Raw image based on a semantic segmentation algorithm to obtain semantic tags in the Raw image.
Alternatively, the semantic segmentation algorithm may include: region-based semantic segmentation, full convolutional network-based semantic segmentation, weakly supervised semantic segmentation, etc.
It should be appreciated that the foregoing is illustrative of a semantic segmentation algorithm; the embodiment of the application does not limit semantic segmentation processing, and can adopt any existing method for semantic segmentation processing.
For example, the first image may include semantic content such as mountains, green plants, and buildings; after semantic segmentation processing is performed on the first image based on the semantic segmentation algorithm, tag 1, tag 2, and tag 3 may be obtained, where tag 1 may be used to represent semantic A, namely mountains; tag 2 may be used to represent semantic B, namely green plants; and tag 3 may be used to represent semantic C, namely buildings.
Step S350, invoking target parameters based on the semantic segmentation result.
Illustratively, in embodiments of the present application, a parameter set may be preconfigured, in which each group of parameters is associated with one kind of semantic information. Parameters adapted to the semantic information of the image can be called from the preconfigured parameter set based on the semantic information included in the semantic segmentation result; during subsequent ISP processing of the image, the parameters corresponding to the semantic information are called to perform ISP processing on the first image.
In one example, a semantic tag may be included in the semantic segmentation result, and a set of parameters may be invoked from a pre-configured set of parameters to process the first image according to the semantic tag.
In one example, the semantic segmentation result may include at least two semantic tags, each of the two tags corresponding to one image region in the image; one target parameter may be invoked from a pre-configured set of parameters based on each of the two semantic tags.
It should be appreciated that the number of target parameters may be associated with the number of semantic tags included in the semantic segmentation result; for example, it may be that one semantic tag in the semantic segmentation result corresponds to a set of target parameters; different semantic tags correspond to different sets of target parameters.
Step S360, performing second image processing on the first image based on the target parameters to obtain a target image.
It should be understood that the target image may refer to a locally enhanced image obtained after the second image processing based on the target parameters called by the semantic segmentation result. Since the target parameter is a parameter adapted to a certain semantic meaning in the first image, even if the second image processing is performed on the entire image of the first image based on the target parameter, the image enhancement effect of the image region corresponding to the semantic meaning is superior to the image enhancement effect of the other regions.
It should also be understood that the second image processing and the first image processing may refer to image processing algorithms that run based on different parameters; for example, ISP algorithms that run based on different parameters. The first image processing may refer to the ISP pipeline processing algorithm shown in FIG. 3, or step S409 in FIG. 6; the second image processing may refer to an ISP algorithm running based on parameters corresponding to the semantic information, for example, step S403, step S405, or step S407 in FIG. 6 below.
Optionally, the semantic segmentation result may include at least two labels, where the at least two labels include a first label and a second label, the first label is used to indicate semantic information of a third image area in the first image, the second label is used to indicate semantic information of a fourth image area in the first image, the target parameter includes a first parameter and a second parameter, the first parameter corresponds to the first label, the second parameter corresponds to the second label, the target image includes a first target image and a second target image, and the second image processing is performed on the first image based on the target parameter to obtain the target image, and the method includes:
performing second image processing on the first image based on the first parameter to obtain a first target image;
and performing second image processing on the first image based on the second parameter to obtain a second target image. Optionally, for a detailed implementation, refer to FIG. 8 and FIG. 9.
Illustratively, the first image may include a green plant and a portrait. A first parameter may be called based on the semantic tag of the green plant, and the first image is processed based on the first parameter to obtain a locally enhanced image 1 of the green-plant area, that is, the first target image; a second parameter may be called based on the semantic tag of the portrait, and the first image is processed based on the second parameter to obtain a locally enhanced image 2 of the portrait area, that is, the second target image.
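As a sketch of this step, assuming a hypothetical preset parameter set keyed by semantic tag (the actual tags and ISP parameters are not published in the application):

```python
import numpy as np

# Hypothetical preset parameter set: one entry per semantic tag. The actual
# ISP parameters (sharpening strength, noise mix, ...) are not disclosed.
PRESET_PARAMS = {
    "green_plant": {"sharpen": 1.5, "saturation": 1.2},   # first parameter
    "portrait":    {"sharpen": 0.8, "saturation": 1.0},   # second parameter
}

def isp_process(raw: np.ndarray, params: dict) -> np.ndarray:
    """Stand-in for the second image processing driven by target parameters."""
    return np.clip(raw.astype(np.float32) * params["sharpen"], 0, 255)

raw = np.random.randint(0, 256, (8, 8), dtype=np.uint8)        # first image
tags = ["green_plant", "portrait"]                             # segmentation result
targets = {t: isp_process(raw, PRESET_PARAMS[t]) for t in tags}
# targets["green_plant"] -> first target image (green-plant area enhanced)
# targets["portrait"]    -> second target image (portrait area enhanced)
```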
Step S370, performing fusion processing on the target image and the second image to obtain a fused image.
Wherein the image quality of the fused image is better than the image quality of the second image.
It should be appreciated that the image quality of the fused image being better than that of the second image may mean that the noise in the fused image is less than the noise in the second image; or that, when the fused image and the second image are evaluated by an image quality evaluation algorithm, the evaluation result is that the image quality of the fused image is higher than that of the second image, and so on; this is not limited in any way in the present application.
For example, the image quality of the fused image being better than that of the second image may mean that the detail information of the fused image is better than that of the second image, for instance that the fused image contains more detail information than the second image; alternatively, it may mean that the sharpness of the fused image is better than that of the second image. The detail information may include edge information, texture information, and the like of the shooting object.
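The application does not name a specific image quality evaluation algorithm. As one assumed possibility, a no-reference sharpness score such as the variance of the Laplacian could be used to compare the fused image with the second image:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian: a common no-reference sharpness score
    (higher = more edge/detail energy). This metric is an assumption here;
    the application does not specify its evaluation algorithm."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float32)
    g = gray.astype(np.float32)
    lap = sum(np.roll(np.roll(g, dy, 0), dx, 1) * k[dy + 1, dx + 1]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return float(lap.var())

# The fused image is expected to score at least as high as the second image.
second = np.random.rand(32, 32)                  # toy stand-in for second image
fused = second + 0.1 * np.random.rand(32, 32)    # toy stand-in for fused image
print(laplacian_variance(fused) >= laplacian_variance(second))
```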
Optionally, the fusion processing may be performed on the target image and the second image based on the semantic segmentation result, to obtain a fused image.
In the embodiment of the application, when the fusion processing is performed, the local image area can be determined from the target image based on the semantic segmentation result, and the local image area in the target image and the second image are subjected to the fusion processing, so that the local enhancement of the second image is realized.
Illustratively, the first image includes green plants, a portrait, and a building, and the target image may include a locally enhanced image 1, a locally enhanced image 2, and a locally enhanced image 3, where the locally enhanced image 1 is an image in which the green-plant image area is enhanced, the locally enhanced image 2 is an image in which the portrait image area is enhanced, and the locally enhanced image 3 is an image in which the building image area is enhanced. The image area where the green plants are located in the locally enhanced image 1, the image area where the portrait is located in the locally enhanced image 2, and the image area where the building is located in the locally enhanced image 3 can be fused with the second image, so as to obtain a fused image in which each of these image areas is enhanced.
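A minimal sketch of this multi-region fusion, assuming hard boolean region masks derived from the semantic segmentation result; the mask shapes and gains are illustrative only:

```python
import numpy as np

H, W = 16, 16
second = np.random.rand(H, W, 3).astype(np.float32)     # reference (second) image
# One locally enhanced image per semantic tag (toy stand-ins):
enhanced = {"green_plant": second * 1.10,
            "portrait":    second * 1.05,
            "building":    second * 1.08}
# Boolean region per tag from the semantic segmentation result:
masks = {t: np.zeros((H, W), dtype=bool) for t in enhanced}
masks["green_plant"][0:5, :] = True
masks["portrait"][6:10, 4:12] = True
masks["building"][11:16, :] = True

fused = second.copy()
for tag, img in enhanced.items():
    # Take each tag's region from its own locally enhanced image.
    fused = np.where(masks[tag][..., None], img, fused)
print(fused.shape)
```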
Optionally, the fusing processing is performed on the target image and the second image based on the semantic segmentation result, so as to obtain a fused image, including:
determining a first image area in the target image based on the semantic segmentation result; determining a second image area based on the first image area, the second image area including the first image area and the second image area being larger than the first image area; and carrying out fusion processing on the second image area and the second image to obtain a fusion image.
Illustratively, when fusion processing is performed, a fusion mode of edge expansion based on semantic positions can be adopted; for example, assuming that the image area where the semantic a is located is an area 1 (3×3 image area), an area 2 (4×4 image area) may be obtained by expanding the image area outwards based on the area 1, and the area 2 and the second image are fused to obtain a fused image; wherein region 2 comprises region 1, and the area of region 2 is greater than the area of region 1.
In the embodiment of the application, a fusion mode based on semantic position edge expansion can be adopted when fusion processing is carried out, so that the fused image can be smoothly transited in different image areas, the problem that the edges of the areas in different areas in the fused image are abrupt is avoided, and the image quality of the fused image is improved.
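As a non-limiting sketch of such semantic-position edge expansion (the region coordinates, margin, and blending weight below are hypothetical, not values disclosed by this application), the following Python/NumPy code expands a rectangular area 1 into a larger area 2 and blends area 2 into the second image:

```python
import numpy as np

def expand_and_fuse(target_img, second_img, region, margin=8, weight=0.7):
    """Expand a rectangular semantic region (area 1 -> area 2) and blend it
    into the second (reference) image. `region` is (top, left, height, width)."""
    h, w = second_img.shape[:2]
    top, left, rh, rw = region
    # Expand the region outwards on all sides (edge expansion), clipped to the image.
    t, l = max(0, top - margin), max(0, left - margin)
    b, r = min(h, top + rh + margin), min(w, left + rw + margin)
    fused = second_img.astype(np.float32).copy()
    # Weighted blend inside the expanded area 2; outside it the second image is kept.
    # A feathered (spatially varying) weight would smooth the transition further.
    fused[t:b, l:r] = (weight * target_img[t:b, l:r].astype(np.float32)
                       + (1.0 - weight) * fused[t:b, l:r])
    return fused.astype(second_img.dtype)

# Usage: area 1 is a 3x3 block at (10, 10); margin=1 turns it into a 5x5 area 2.
second = np.random.randint(0, 255, (64, 64, 3), np.uint8)
target = np.random.randint(0, 255, (64, 64, 3), np.uint8)
fused = expand_and_fuse(target, second, region=(10, 10, 3, 3), margin=1)
```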
Alternatively, the second image area may be a circular image area.
In the embodiment of the application, since a round edge is smoother than a rectangular one, a circular edge-expansion mode can be adopted for regular polygons or segmentation results with sharp corners, and an edge-expansion mode that enlarges the region by a multiplication coefficient (resize or upsampling) can be adopted for irregular segmentation results without sharp edges; this reduces abrupt or discontinuous color, detail, and the like at the fusion edges in the fused image obtained after fusion processing, thereby improving the image quality of the fused image.
In one example, if the first image area in the target image is an angular image area, the second image area may be obtained by using a circular edge-expanding method.
For example, the center of the first image area is determined first, and a circular second image area is obtained based on the expansion of the center of the first image area.
In one example, if the first image area in the target image is an image area without an edge, the second image area may be obtained by an upsampling edge-expanding method.
When the semantic segmentation result includes a plurality of semantic tags and the second image areas corresponding to the semantic tags intersect, a fusion manner that increases the fusion weight coefficient may be adopted during fusion processing, or a fusion manner that fuses the low-priority semantic area first and the high-priority semantic area afterwards may be adopted; the priority of a semantic area can be determined based on the shooting requirements of the user; for example, when a user shoots a portrait, the priority of the portrait semantics is higher than that of other semantics; when the user shoots a landscape, the priority of landscape semantics such as greenery is higher than the priority of the portrait.
It should be understood that the foregoing is illustrative of semantic priorities, and the present application places no limitation on how semantic priorities are determined.

Optionally, performing fusion processing on the target image and the second image to obtain a fused image, including:
and carrying out fusion processing on the image of the first channel of the target image and the image of the first channel of the second image to obtain a fusion image.
Alternatively, the first channel may be a Y channel, or the first channel may be a UV channel.
For example, if the detail information of the fused image is to be improved, the second image and the image of the Y channel in the target image may be extracted for fusion processing.
For example, if color information of the fused image is to be improved, the second image may be extracted and fused with an image of the UV channel in the target image.
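A minimal sketch of such single-channel fusion, assuming planar float YUV arrays of shape (H, W, 3) and a soft semantic mask (both hypothetical representations, not mandated by this application):

```python
import numpy as np

def fuse_channels(second_yuv, target_yuv, mask, channel="Y"):
    """Fuse only one channel family of two YUV images inside a semantic mask.
    Inputs are float arrays of shape (H, W, 3) in Y, U, V order; `mask` is an
    (H, W) array in [0, 1]. Fusing Y transfers detail; fusing UV transfers color."""
    fused = second_yuv.copy()
    chans = [0] if channel == "Y" else [1, 2]
    for c in chans:
        fused[..., c] = mask * target_yuv[..., c] + (1.0 - mask) * second_yuv[..., c]
    return fused
```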
In the embodiment of the application, target parameters of image processing adapted to different semantic information can be obtained based on the semantic segmentation result of the first image; performing image processing on the first image based on the target parameters allows local enhancement of different semantic areas in the image, yielding the target image, namely a locally enhanced image; the locally enhanced image and the second image are fused to obtain a fused image whose detail information or sharpness is superior to that of the second image. Compared with the existing image processing method that adopts a set of general parameters, the image processing method of the application does not sacrifice local image areas for the sake of balancing the global image with one set of general parameters; the target parameters are obtained based on the semantic segmentation result and adapted to the semantic information in the image, so image processing of the first image based on the target parameters enables image enhancement of local image areas; in other words, by the image processing method, the detail and/or sharpness of different semantic areas in the fused image can be improved, thereby improving the image quality of the fused image.
In addition, the image processing method can realize image enhancement based on different semantic information in ISP processing; compared with the method that an additional algorithm is added for image enhancement after ISP processing, the image processing method provided by the application has lower performance requirements on the electronic equipment, and can save the power consumption of the electronic equipment to a certain extent.
Implementation one
Fig. 6 is a schematic diagram of an image processing method according to an embodiment of the present application. The image processing method may be performed by the electronic device shown in fig. 1; the method 400 includes steps S401 to S412, and the following describes steps S401 to S412 in detail.
Step S401, a Raw image (an example of the first image) is acquired.
It should be understood that in embodiments of the present application, a Raw image may refer to an image of a Raw color space.
Optionally, the obtained Raw image may be a single frame Raw image, or may also be a multi-frame Raw image; the present application is not limited in any way.
Step S402, semantic segmentation processing.
It should be understood that semantic segmentation refers to the process of linking each pixel in an image to a class label.
Illustratively, semantic segmentation processing can be performed on the Raw image based on a semantic segmentation algorithm to obtain semantic tags in the Raw image.
Alternatively, the semantic segmentation algorithm may include: region-based semantic segmentation, full convolutional network-based semantic segmentation, weakly supervised semantic segmentation, etc.
It should be appreciated that the foregoing is illustrative of a semantic segmentation algorithm; the embodiment of the application does not limit semantic segmentation processing, and can adopt any existing method for semantic segmentation processing.
Alternatively, when the semantic segmentation processing is performed, the semantic segmentation processing may be performed based on a small-sized Raw image; for example, the collected Raw image may be downsampled to obtain a small-sized Raw image; and carrying out semantic recognition on the small-size Raw image through a semantic segmentation algorithm to obtain a semantic segmentation result.
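A minimal sketch of segmentation on a small-size image, with a stand-in thresholding "segmenter" in place of a real semantic segmentation network (all names below are illustrative):

```python
import numpy as np

def segment_on_small_image(raw, scale=4, segment_fn=None):
    """Downsample the Raw image, run semantic segmentation on the small image,
    then upsample the label map back to full resolution (nearest neighbour,
    so class labels are preserved)."""
    small = raw[::scale, ::scale]                  # cheap decimation for illustration
    labels_small = segment_fn(small)               # (h, w) integer label map
    labels = np.repeat(np.repeat(labels_small, scale, axis=0), scale, axis=1)
    return labels[: raw.shape[0], : raw.shape[1]]  # crop to the original size

# Stand-in segmenter: thresholds intensity into two classes; a real system
# would use a trained network here.
fake_segment = lambda img: (img.mean(axis=-1) > 128).astype(np.int32)
raw = np.random.randint(0, 255, (256, 256, 3), np.uint8)
label_map = segment_on_small_image(raw, scale=4, segment_fn=fake_segment)
```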
For example, the Raw image may include semantic content such as a mountain, green plants, and a building; then tag 1, tag 2, and tag 3 may be obtained after performing semantic segmentation processing on the Raw image based on a semantic segmentation algorithm; wherein tag 1 can be used to represent semantic A, namely the mountain; tag 2 can be used to represent semantic B, namely the green plants; and tag 3 can be used to represent semantic C, namely the building.
In the embodiment of the application, semantic segmentation processing is performed on the Raw image to obtain semantic information in the Raw image; when subsequent ISP path processing is performed on the Raw image, different parameters can be adapted based on the different semantic information of the Raw image, and ISP path processing is performed based on the different parameters, so that local enhancement of different areas in the image is realized and the image quality is improved.
Step S403, ISP processing (one example of second image processing) based on the parameter 1.
It should be appreciated that parameter 1 is a parameter of the ISP process that adapts to semantic A determined from the parameter set based on semantic A; parameters may include, but are not limited to: sharpening intensity parameters, parameters corresponding to high-frequency information and low-frequency information, and parameters corresponding to noise overlapping intensity.
It should also be appreciated that the parameters included in the parameter set may be parameters that are pre-configured based on different semantic information.
It should be noted that, parameters corresponding to different semantics included in the parameter set may be parameters configured in advance based on shooting requirements of the user; for example, the different semantics may include, but are not limited to: portrait, building, mountain, green plant, sky, etc.
Illustratively, the parameter set may include tag 1-parameter 1, tag 2-parameter 2, tag 3-parameter 3, and so on; the semantic information corresponding to the tag 1 is included in the Raw image based on semantic segmentation processing, and then the parameter 1 can be determined from the parameter set based on the tag 1; ISP processing is performed on the Raw image based on the parameter 1.
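A minimal sketch of such a tag-to-parameter lookup; the tag names and tuning fields are hypothetical placeholders, not the actual parameter set of this application:

```python
# Hypothetical parameter set: each semantic tag maps to a bundle of ISP tuning
# values (field names are illustrative only).
PARAMETER_SET = {
    "tag1_mountain": {"sharpen_strength": 0.8, "noise_overlay": 0.10, "hf_gain": 1.3},
    "tag2_greenery": {"sharpen_strength": 0.5, "noise_overlay": 0.20, "hf_gain": 1.1},
    "tag3_building": {"sharpen_strength": 1.0, "noise_overlay": 0.05, "hf_gain": 1.5},
}

def invoke_parameters(tags):
    """Return the ISP parameter bundle for each tag found by segmentation."""
    return {tag: PARAMETER_SET[tag] for tag in tags if tag in PARAMETER_SET}

# Usage: segmentation found a mountain and a building in the Raw image.
params = invoke_parameters(["tag1_mountain", "tag3_building"])
```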
Alternatively, ISP processing includes, but is not limited to: automatic white balance processing, noise reduction processing, saturation processing, CCM, CCT, or sharpness processing, etc. For example, see the related description of fig. 8 or fig. 9 below.
Wherein the automatic white balance processing is used to enable the camera to restore a white object to white at any color temperature; affected by color temperature, white paper appears yellowish at low color temperatures and bluish at high color temperatures; the purpose of white balance is to make a white object appear white at any color temperature, i.e. R = G = B. Noise reduction is used to reduce noise in the image; noise present in the image may affect the visual experience of the user, and noise reduction processing can improve the image quality to some extent. Saturation processing is used to improve the vividness, also known as purity, of the colors in an image. CCM refers to color correction, used to calibrate the accuracy of colors other than white in an image. CCT refers to color temperature estimation, in which some non-blackbody light sources in an image can be described by the color temperature of the blackbody they are most visually similar to.
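For illustration only, the following sketch implements one common AWB heuristic (gray world) and a CCM multiplication; the application does not mandate these particular algorithms:

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world automatic white balance: scale each channel so the channel
    means are equal, pushing white objects toward R = G = B."""
    rgb = rgb.astype(np.float32)
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(rgb * gains, 0, 255)

def apply_ccm(rgb, ccm):
    """Color correction: multiply every pixel by a 3x3 color correction matrix."""
    return np.clip(rgb.astype(np.float32) @ ccm.T, 0, 255)

img = np.random.randint(0, 255, (4, 4, 3), np.uint8)
balanced = gray_world_awb(img)
corrected = apply_ccm(balanced, np.eye(3, dtype=np.float32))  # identity CCM placeholder
```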
Step S404, a locally enhanced image 1 (an example of a target image) is obtained.
It should be understood that the locally enhanced image 1 may refer to a locally enhanced image obtained after ISP processing based on the parameter 1 adapted to semantic A; since parameter 1 is an ISP path parameter adapted to semantic A, even if the entire Raw image is ISP processed based on parameter 1, the enhancement effect of the image region corresponding to semantic A is superior to that of other regions.
Step S405, ISP processing (one example of second image processing) based on the parameter 2.
It should be appreciated that parameter 2 is a parameter of the ISP process that adapts to semantic B determined from the parameter set based on semantic B; parameters may include, but are not limited to: sharpening intensity parameters, parameters corresponding to high-frequency information and low-frequency information, and parameters corresponding to noise overlapping intensity.
It should also be appreciated that the parameters included in the parameter set may be parameters that are pre-configured based on different semantic information.
Illustratively, the parameter set may include tag 1-parameter 1, tag 2-parameter 2, tag 3-parameter 3, and so on; the semantic information corresponding to the tag 2 is included in the Raw image based on semantic segmentation processing, and then the parameter 2 can be determined from the parameter set based on the tag 2; ISP processing is performed on the Raw image based on the parameter 2.
Alternatively, ISP processing includes, but is not limited to: automatic white balance processing, noise reduction processing, saturation processing, CCM, CCT, or sharpness processing, etc. For example, see fig. 8 or fig. 9 below.
Step S406 obtains a locally enhanced image 2 (an example of a target image).
It should be understood that the locally enhanced image 2 may refer to an enhanced image obtained after ISP processing based on the parameter 2 adapted to the semantic B; since the parameter 2 is an ISP path parameter adapted to the semantic B, even if the entire image of the Raw image is subjected to ISP processing based on the parameter 2, the enhancement effect of the image region corresponding to the semantic B is better than that of other regions.
Step S407, ISP processing (one example of second image processing) based on the parameter 3.
It should be appreciated that parameter 3 is a parameter of the ISP process that adapts to the semantic C, determined from the parameter set based on the semantic C; parameters may include, but are not limited to: sharpening intensity parameters, parameters corresponding to high-frequency information and low-frequency information, and parameters corresponding to noise overlapping intensity.
It should also be appreciated that the parameters included in the parameter set may be parameters that are pre-configured based on different semantic information.
Illustratively, the parameter set may include tag 1-parameter 1, tag 2-parameter 2, tag 3-parameter 3, and so on; the semantic information corresponding to the tag 3 is included in the Raw image based on semantic segmentation processing, and then the parameter 3 can be determined from the parameter set based on the tag 3; ISP processing is performed on the Raw image based on the parameter 3.
Alternatively, ISP processing includes, but is not limited to: automatic white balance processing, noise reduction processing, saturation processing, CCM, CCT, or sharpness processing, etc. For example, see fig. 8 or fig. 9 below.
Step S408, a locally enhanced image 3 (an example of a target image) is obtained.
It should be understood that the locally enhanced image 3 may refer to an enhanced image obtained after ISP processing based on the parameter 3 adapted to the semantics C; since the parameter 3 is an ISP path parameter adapted to the semantic C, even if the entire image of the Raw image is subjected to ISP processing based on the parameter 3, the enhancement effect of the image region corresponding to the semantic C is better than that of other regions.
In one example, the Raw image includes green plants, figures, buildings, etc.; wherein, the semantic A can be green plants, the semantic B can be portraits, and the semantic C can be buildings; ISP processing is carried out on the Raw image based on the parameter 1, namely the parameter which is adapted to the green plant image area, so that the local enhancement of the image area where the green plant is positioned is realized; for example, detail information of the green plant area may be enhanced; ISP processing is carried out on the Raw image based on the parameter 2, namely the parameter which is adapted to the image area of the portrait, so that the local enhancement of the image area where the portrait is positioned is realized; for example, the image area where the portrait is located can be made more natural, and the introduction of excessive distortion of facial textures is avoided; ISP processing is carried out on the Raw image based on the parameter 3, namely the parameter which is adapted to the building image area, so that the local enhancement of the image area where the building is positioned is realized; for example, the sharpness of the image area of the building may be increased, making the lines of the image area of the building more visible.
Alternatively, when the ISP processing is performed based on the parameter 1, the parameter 2, or the parameter 3, the ISP processing may be performed on the whole Raw image; or, ISP processing can be performed on the image areas where the different semantics are located; the present application is not limited in any way.
For example, ISP processing is performed on the image area corresponding to the semantic A based on the parameter 1; ISP processing is carried out on the image area corresponding to the semantic B based on the parameter 2; and carrying out ISP processing on the image area corresponding to the semantic C based on the parameter 3.
Alternatively, the steps S403 and S404, the steps S405 and S406, the steps S407 and S408 may be performed in parallel; for example, the ISP processing of the Raw image based on the parameter 1, the ISP processing of the Raw image based on the parameter 2, and the ISP processing of the Raw image based on the parameter 3 may be performed simultaneously.
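A minimal sketch of running the per-semantic branches in parallel with a thread pool; `isp_process` is a hypothetical placeholder for a full per-branch ISP pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

def run_isp_branches_parallel(raw, params_by_tag, isp_process):
    """Run one ISP branch per semantic tag in parallel (steps S403/S405/S407),
    yielding one locally enhanced image per tag."""
    with ThreadPoolExecutor(max_workers=len(params_by_tag)) as pool:
        futures = {tag: pool.submit(isp_process, raw, p)
                   for tag, p in params_by_tag.items()}
        return {tag: f.result() for tag, f in futures.items()}

# Usage (hypothetical): enhanced = run_isp_branches_parallel(raw, params, my_isp)
```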
Step S409, ISP processing (one example of first image processing) employing general parameters.
For example, the Raw image may be ISP processed based on a common set of parameters using an existing ISP path algorithm.
Step S410 obtains a reference image (an example of the second image).
Illustratively, the reference image may refer to an image obtained by processing the Raw image using an existing ISP path algorithm; for example, when performing ISP processing on a Raw image based on a conventional ISP path algorithm, the ISP processing is performed using a set of common parameters for the image areas corresponding to the respective semantics in the Raw image.
Optionally, the reference image may serve as the reference for the fusion processing when step S411 is performed.
Step S411, fusion processing.
Illustratively, the reference image, the locally enhanced image 1, the locally enhanced image 2, and the locally enhanced image 3 may be subjected to fusion processing based on semantic information in the image and position information of different semantics.
For example, an image region 1 corresponding to the semantic a is determined in the partial enhanced image 1, an image region 2 corresponding to the semantic B is determined in the partial enhanced image 2, and an image region 3 corresponding to the semantic C is determined in the partial enhanced image 3; the image area 1, the image area 2, and the image area 3 are fused with the reference image as a reference.
In one example, the Raw image comprises green plants, figures and buildings, the local enhancement image 1 is an image enhanced by a green plant image area, the local enhancement image 2 is an image enhanced by a figure image area, and the local enhancement image 3 is an image enhanced by a building image area; the image area where the green plants are located in the local enhancement image 1, the image area where the human images are located in the local enhancement image 2 and the image area where the buildings are located in the local enhancement image 3 can be fused into the reference image, so that a global enhancement image in which enhancement is realized in each image area is obtained.
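A minimal sketch of step S411, assuming an integer label map from segmentation and a hypothetical tag-to-id mapping; each semantic region is copied from its locally enhanced image into the reference image:

```python
import numpy as np

def fuse_semantic_regions(reference, enhanced_by_tag, label_map, tag_ids):
    """Paste each semantic region from its locally enhanced image into the
    reference image. `label_map` holds integer class ids per pixel; `tag_ids`
    maps a tag name to its id in the label map."""
    fused = reference.copy()
    for tag, enhanced in enhanced_by_tag.items():
        mask = label_map == tag_ids[tag]   # pixels belonging to this semantic
        fused[mask] = enhanced[mask]
    return fused

# Usage with hypothetical tags for greenery / portrait / building:
# fused = fuse_semantic_regions(reference,
#                               {"green": img1, "portrait": img2, "building": img3},
#                               labels, {"green": 1, "portrait": 2, "building": 3})
```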
Alternatively, the reference image, the locally enhanced image 1, the locally enhanced image 2, and the locally enhanced image 3 may be YUV images; if the detail information of the image is to be improved, the reference image, the local enhancement image 1, the local enhancement image 2 and the Y-channel image in the local enhancement image 3 can be extracted for fusion processing.
Optionally, the reference image, the locally enhanced image 1, the locally enhanced image 2 and the locally enhanced image 3 may be YUV images, and if color information of the images is to be enhanced, the images of UV channels in the reference image, the locally enhanced image 1, the locally enhanced image 2 and the locally enhanced image 3 may be extracted for fusion processing.
And step S412, obtaining a fusion image.
For example, the fused image may refer to an image obtained by performing fusion processing on a locally enhanced image obtained by performing ISP processing on an image area based on semantic information and a reference image.
It should be understood that the above steps S401 to S412 are illustrated with three pieces of semantic information included in the Raw image; the amount of semantic information in the Raw image is not limited in any way.
In the embodiment of the application, ISP parameters adapted to different semantic information can be obtained based on the semantic information in the Raw image; ISP processing is carried out based on ISP parameters corresponding to different semantic information, so that ISP processing can be carried out aiming at different semantic areas in the image, and a locally enhanced image is obtained; carrying out fusion processing on the locally enhanced image and the reference image to obtain a fusion image; compared with the prior art that ISP processing is performed by adopting a group of general ISP parameters, the image processing method of the application does not have the problem that the local image area in the image is sacrificed because the global image is required to be balanced through the group of general parameters; by the image processing method, details and/or definition of different semantic areas in the fusion image can be improved, so that image quality of the fusion image is improved.
In addition, the image processing method of the application realizes image enhancement based on different semantic information within ISP path processing; compared with adding an extra algorithm for image enhancement after ISP path processing, the image processing method provided by the application has lower performance requirements on the electronic device and can save power consumption of the electronic device to a certain extent.
Implementation two
Illustratively, the above-mentioned fig. 6 is illustrated by executing step S403, step S405 and step S407 in parallel; in one example, ISP processing based on ISP parameters corresponding to different semantic information may also be performed serially, as shown in FIG. 7.
Fig. 7 is a schematic diagram of an image processing method according to an embodiment of the present application. The image processing method may be performed by the electronic device shown in fig. 1; the method 500 includes steps S501 to S509, and steps S501 to S509 are described in detail below.
Step S501, acquiring a Raw image.
It should be understood that in embodiments of the present application, a Raw image may refer to an image of a Raw color space.
Optionally, the obtained Raw image may be a single frame Raw image, or may also be a multi-frame Raw image; the present application is not limited in any way.
Step S502, semantic segmentation processing.
It should be understood that semantic segmentation refers to the process of linking each pixel in an image to a class label.
Illustratively, semantic segmentation processing can be performed on the Raw image based on a semantic segmentation algorithm to obtain semantic tags in the Raw image.
Alternatively, the semantic segmentation algorithm may include: region-based semantic segmentation, full convolutional network-based semantic segmentation, weakly supervised semantic segmentation, etc.
It should be appreciated that the foregoing is illustrative of a semantic segmentation algorithm; the embodiment of the application does not limit semantic segmentation processing, and can adopt any existing method for semantic segmentation processing.
For example, the Raw image may include semantic content such as a mountain, green plants, and a building; then tag 1, tag 2, and tag 3 may be obtained after performing semantic segmentation processing on the Raw image based on a semantic segmentation algorithm; wherein tag 1 can be used to represent semantic A, namely the mountain; tag 2 can be used to represent semantic B, namely the green plants; and tag 3 can be used to represent semantic C, namely the building.
In the embodiment of the application, semantic segmentation processing is performed on the Raw image to obtain semantic information in the Raw image; when subsequent ISP path processing is performed on the Raw image, different parameters can be adapted based on the different semantic information of the Raw image, and ISP path processing is performed based on the different parameters, so that local enhancement of different areas in the image is realized and the image quality is improved.
Step S503, ISP processing is performed based on the semantics of the ith area.
Illustratively, performing ISP processing based on the semantics of the ith region may refer to determining a set of parameters adapted to the semantics information of the ith region from a pre-configured set of parameters based on the semantics information of the ith region; ISP processing is performed based on the set of parameters.
It should be appreciated that the parameters of the ISP processing corresponding to different semantic regions in the image may differ; parameters of ISP processing may include, but are not limited to: sharpening intensity parameters, parameters corresponding to high-frequency and low-frequency information, and parameters corresponding to noise overlapping intensity.
Alternatively, when performing ISP processing based on semantic information, ISP processing may be performed on the entirety of the Raw image; or, ISP processing can be performed on the image areas where the different semantics are located; the present application is not limited in any way.
Alternatively, ISP processing includes, but is not limited to: automatic white balance processing, noise reduction processing, saturation processing, CCM, CCT, or sharpness processing, etc. For example, see fig. 8 or fig. 9 below.
Wherein the automatic white balance processing is used to enable the camera to restore a white object to white at any color temperature; affected by color temperature, white paper appears yellowish at low color temperatures and bluish at high color temperatures; the purpose of white balance is to make a white object appear white at any color temperature, i.e. R = G = B. Noise reduction is used to reduce noise in the image; noise present in the image may affect the visual experience of the user, and noise reduction processing can improve the image quality to some extent. Saturation processing is used to improve the vividness, also known as purity, of the colors in an image. CCM refers to color correction, used to calibrate the accuracy of colors other than white in an image. CCT refers to color temperature estimation, in which some non-blackbody light sources in an image can be described by the color temperature of the blackbody they are most visually similar to.
And step S504, obtaining an i-th frame local area enhanced image.
It should be understood that the i-th frame local area enhanced image refers to an enhanced image obtained after ISP processing based on the parameter adapted to a certain semantic; since the parameter is an ISP path parameter adapted to that semantic in the image, even if the entire Raw image is ISP processed based on the parameter, the enhancement effect of the image region corresponding to that semantic is better than that of other regions.
In one example, the Raw image includes green plants, figures and buildings; the semantics of the i-th region may refer to a portrait, and the i-th frame local region enhanced image may refer to an image obtained by performing ISP processing on the Raw image based on a set of parameters of ISP processing corresponding to the portrait, so as to obtain the image with locally enhanced image region where the portrait is located.
Step S505, the i-th frame local enhanced image is stored.
It should be noted that, since the image processing method shown in fig. 7 performs ISP processing serially based on the ISP parameters corresponding to different semantic information, the image area corresponding to each piece of semantic information in the image needs to be processed separately based on different parameters; after each image region is processed, one locally enhanced image is obtained.
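A minimal sketch of this serial variant; as in the parallel sketch above, `isp_process` is a hypothetical placeholder for the per-region ISP pipeline:

```python
def run_isp_branches_serial(raw, params_by_tag, isp_process):
    """Serial variant (fig. 7): process one semantic region per iteration
    (step S503), storing each i-th locally enhanced frame (step S505)."""
    stored_frames = []
    for tag, params in params_by_tag.items():
        enhanced_i = isp_process(raw, params)    # ISP processing for region i
        stored_frames.append((tag, enhanced_i))  # store the i-th local enhancement
    return stored_frames
```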
Step S506, ISP processing adopting general parameters.
By way of example, a conventional ISP path algorithm may be employed to ISP process the Raw image based on a common set of parameters; for example, when performing ISP processing on a Raw image based on a conventional ISP path algorithm, the ISP processing is performed using a set of common parameters for the image areas corresponding to the respective semantics in the Raw image.
Step S507, obtaining a reference image.
Illustratively, the reference image may refer to an image obtained by processing the Raw image using an existing ISP path algorithm.
Optionally, the reference image may serve as the reference for the fusion processing when step S508 is performed.
Step S508, fusion processing.
For example, the reference image and the i-frame locally enhanced image may be subjected to fusion processing based on semantic information in the image.
In one example, the Raw image includes green plants, figures and buildings; the i-frame local enhancement image comprises 3 frames of local enhancement images, namely a green plant area enhancement image, a portrait area enhancement image and a building area enhancement image; the image area where the green plants are located in the green plant area enhanced image, the image area where the human images are located in the human image area enhanced image, the image area where the buildings are located in the building area enhanced image and the reference image can be fused, so that a global enhanced image in which enhancement is realized in each image area is obtained.
Optionally, the reference image and the i-frame local enhancement image may be YUV images; if the detail information of the image is to be improved, the reference image and the Y-channel image in the i-frame local enhancement image can be extracted for fusion processing.
Optionally, the reference image and the i-frame local area enhanced image may be YUV images, and if color information of the images is to be improved, images of UV channels in the reference image and the i-frame local area enhanced image may be extracted for fusion processing.
Step S509, obtaining a fusion image.
Illustratively, the fused image may refer to an image obtained by fusing the i-frame locally enhanced image with the reference image.
In the embodiment of the application, ISP parameters adapted to different semantic information can be obtained based on the semantic information in the Raw image; ISP processing is carried out based on ISP parameters corresponding to different semantic information, so that ISP processing can be carried out aiming at different semantic areas in the image, and a locally enhanced image is obtained; carrying out fusion processing on the locally enhanced image and the reference image to obtain a fusion image; compared with the prior art that ISP processing is performed by adopting a group of general ISP parameters, the image processing method of the application does not have the problem that the local image area in the image is sacrificed because the global image is required to be balanced through the group of general parameters; by the image processing method, details and/or definition of different semantic areas in the fusion image can be improved, so that image quality of the fusion image is improved.
In addition, the image processing method of the application realizes image enhancement based on different semantic information within ISP path processing; compared with adding an extra algorithm for image enhancement after ISP path processing, the image processing method provided by the application has lower performance requirements on the electronic device and can save power consumption of the electronic device to a certain extent.
The specific flow of the image processing method according to the embodiment of the present application is described in detail below with reference to fig. 8 and 9.
Fig. 8 is a schematic diagram of an image processing method according to an embodiment of the present application. The image processing method may be performed by the electronic device shown in fig. 1; the method 700 includes steps S701 to S721, and the steps S701 to S721 are described in detail below.
Step S701, acquiring a Raw image.
It should be understood that in embodiments of the present application, a Raw image may refer to an image of a Raw color space.
Optionally, the obtained Raw image may be a single frame Raw image, or may also be a multi-frame Raw image; the present application is not limited in any way.
Step S702, semantic segmentation processing.
Illustratively, semantic segmentation processing can be performed on the Raw image based on a semantic segmentation algorithm to obtain semantic tags in the Raw image.
Alternatively, the semantic segmentation algorithm may include: region-based semantic segmentation, full convolutional network-based semantic segmentation, weakly supervised semantic segmentation, etc.
It should be appreciated that the foregoing is illustrative of a semantic segmentation algorithm; the embodiment of the application does not limit semantic segmentation processing, and can adopt any existing method for semantic segmentation processing.
Illustratively, performing ISP processing based on the parameters corresponding to the semantics a may include steps S703 to S706.
Step S703, a first automatic white balance process.
It should be appreciated that the automatic white balance processing is used to enable the camera to restore a white object to white at any color temperature; affected by color temperature, white paper appears yellowish at low color temperatures and bluish at high color temperatures; the purpose of white balance is to make a white object appear white at any color temperature, i.e. R = G = B.

Illustratively, the first automatic white balance processing may refer to a white balance algorithm executed with the white balance parameters invoked based on semantic A in the Raw image.
Step S704, first noise reduction processing.
It should be appreciated that noise reduction is used to reduce noise in the image; noise present in the image may affect the visual experience of the user, and the image quality of the image may be improved to some extent through the noise reduction process.
Illustratively, the first noise reduction processing may refer to a noise reduction algorithm executed with the noise reduction parameters invoked based on semantic A in the Raw image.
For example, the parameters corresponding to noise processing may include parameters corresponding to noise aliasing intensity.
Step S705, first color processing.
Alternatively, the first color processing may include saturation processing, color correction processing, or color temperature estimation; saturation processing is used to improve the vividness, also called purity, of the colors in an image; color correction processing is used to calibrate the accuracy of colors other than white in an image; color temperature estimation means that some non-blackbody light sources in an image can be described by the color temperature of the blackbody they are most visually similar to.
Illustratively, the first color processing may refer to image color processing algorithms executed with the color processing parameters invoked based on semantic A in the Raw image.
Step S706, a first sharpness process.
It should be appreciated that sharpness, sometimes also called "definition", is an indicator reflecting the clarity of the image plane and the crispness of the image edges.
Illustratively, the first sharpness processing may refer to a sharpness processing algorithm executed with the sharpness parameters invoked based on semantic A in the Raw image.
For example, the parameters corresponding to the sharpness processing may include sharpening strength parameters.
Step S707, obtaining a local enhanced image based on the parameters corresponding to the semantic A.
It should be understood that the locally enhanced image obtained based on the parameter corresponding to the semantic a may refer to an image obtained by performing ISP processing on the Raw image based on the parameter corresponding to the semantic a; because the parameter called based on the semantic A is the ISP parameter adapted to the semantic A, even if the whole image of the Raw image is ISP processed based on the parameter, the enhancement effect of the image area corresponding to the semantic A is better than that of other areas; therefore, the image obtained after ISP processing based on the parameters corresponding to the semantic A can be the image with the local enhancement of the area where the semantic A is located.
Optionally, ISP processing may also be performed on the image area where the semantic a is located based on the parameter corresponding to the semantic a.
Illustratively, performing ISP processing based on parameters corresponding to the semantics B may include steps S708 to S711.
Alternatively, the image processing may be performed on the entirety of the Raw image based on steps S708 to S711.
Alternatively, image processing may be performed on a partial image area corresponding to the semantic B in the Raw image based on steps S708 to S711.
It should be understood that steps S708 to S711 in the ISP process are exemplified below; the specific steps in the ISP process are not limited in this application.
Step S708, a second automatic white balance process.
It should be appreciated that the automatic white balance processing is used to enable the camera to restore a white object to white at any color temperature; affected by color temperature, white paper appears yellowish at low color temperatures and bluish at high color temperatures; the purpose of white balance is to make a white object appear white at any color temperature, i.e. R = G = B.

Illustratively, the second automatic white balance processing may refer to a white balance algorithm executed with the white balance parameters invoked based on semantic B in the Raw image.
Step S709, a second noise reduction process.
It should be appreciated that noise reduction is used to reduce noise in the image; noise present in the image may affect the visual experience of the user, and the image quality of the image may be improved to some extent through the noise reduction process.
Illustratively, the second noise reduction processing may refer to a noise reduction algorithm executed with the noise reduction parameters invoked based on semantic B in the Raw image.
For example, the parameters corresponding to noise processing may include parameters corresponding to noise aliasing intensity.
Step S710, second color processing.
Alternatively, the second color processing may include saturation processing, color correction processing, or color temperature estimation; saturation processing is used to improve the vividness, also called purity, of the colors in an image; color correction processing is used to calibrate the accuracy of colors other than white in an image; color temperature estimation means that some non-blackbody light sources in an image can be described by the color temperature of the blackbody they are most visually similar to.
Illustratively, the second color processing may refer to image color processing algorithms executed with the color processing parameters invoked based on semantic B in the Raw image.
Step S711, second sharpness processing.
It should be appreciated that sharpness, sometimes also called "definition", is an indicator reflecting the clarity of the image plane and the crispness of the image edges.
Illustratively, the second sharpness processing may refer to a sharpness processing algorithm executed with the sharpness parameters invoked based on semantic B in the Raw image.
For example, the parameters corresponding to the sharpness processing may include sharpening strength parameters.
Step S712, obtaining a local enhanced image based on the parameters corresponding to the semantics B.
It should be understood that the locally enhanced image obtained based on the parameter corresponding to the semantic B may refer to an image obtained by performing ISP processing on the Raw image based on the parameter corresponding to the semantic B; because the parameter called based on the semantic B is the ISP parameter adapted to the semantic B, even if the whole image of the Raw image is ISP processed based on the parameter, the enhancement effect of the image area corresponding to the semantic B is better than that of other areas; therefore, the image obtained after ISP processing based on the parameters corresponding to the semantic B can be the image with the local enhancement of the area where the semantic B is located.
Alternatively, ISP processing may be performed on the image area where the semantic B is located based on the parameter corresponding to the semantic B.
Illustratively, performing ISP processing based on the parameters corresponding to the semantics C may include steps S713 to S716.
Alternatively, the entirety of the Raw image may be subjected to image processing based on steps S713 to S716.
Alternatively, image processing may be performed on a partial image area corresponding to the semantic C in the Raw image based on steps S713 to S716.
It should be understood that steps S713 to S716 in the ISP process are exemplified below; the specific steps in the ISP process are not limited in this application.
Step S713, a third automatic white balance process.
Illustratively, the third automatic white balance processing may refer to a white balance algorithm executed with the white balance parameters invoked based on semantic C in the Raw image.
Step S714, third noise reduction processing.
For example, the third noise reduction processing may refer to a noise reduction algorithm executed with the noise reduction parameters invoked based on semantic C in the Raw image.
For example, the parameters corresponding to noise processing may include parameters corresponding to noise aliasing intensity.
Step S715, third color processing.
Alternatively, the third color processing may include saturation processing, color correction processing, or color temperature estimation; saturation processing is used to improve the vividness, also called purity, of the colors in an image; color correction processing is used to calibrate the accuracy of colors other than white in an image; color temperature estimation means that some non-blackbody light sources in an image can be described by the color temperature of the blackbody they are most visually similar to.
Illustratively, the third color processing may refer to image color processing algorithms executed with the color processing parameters invoked based on semantic C in the Raw image.
Step S716, third sharpness processing.
It should be appreciated that sharpness, sometimes also called "definition", is an indicator reflecting the clarity of the image plane and the crispness of the image edges.
Illustratively, the third sharpness processing may refer to a sharpness processing algorithm executed with the sharpness parameters invoked based on semantic C in the Raw image.
For example, the parameters corresponding to the sharpness processing may include sharpening strength parameters.
It should be appreciated that the first automatic white balance process, the second automatic white balance process, or the third automatic white balance process described above may be an automatic white balance algorithm that is executed based on different algorithm parameters. Similarly, the above-described first noise reduction process, second noise reduction process, or third noise reduction process may be a noise reduction processing algorithm executed based on different parameters; similarly, the above-described first color process, second color process, or third color process may be a color process algorithm executed based on different algorithm parameters; similarly, the first sharpness process, the second sharpness process, or the third sharpness process described above may be sharpness processing algorithms that are executed based on different algorithm parameters.
Step S717, obtaining a local enhanced image based on the parameters corresponding to the semantics C.
It should be understood that the locally enhanced image obtained based on the parameter corresponding to the semantic C may refer to an image obtained by performing ISP processing on the Raw image based on the parameter corresponding to the semantic C; because the parameter called based on the semantic C is the ISP parameter adapted to the semantic C, even if the whole image of the Raw image is ISP processed based on the parameter, the enhancement effect of the image area corresponding to the semantic C is better than that of other areas; therefore, the image obtained after ISP processing based on the parameters corresponding to the semantics C can be the image with the local enhancement of the area where the semantics C is located.
Optionally, ISP processing may also be performed on the image area where the semantic C is located based on the parameter corresponding to the semantic C.
Step S718, ISP processing using general parameters.
By way of example, a conventional ISP path algorithm may be employed to ISP process the Raw image based on a common set of parameters; for example, when performing ISP processing on a Raw image based on a conventional ISP path algorithm, the ISP processing is performed using a set of common parameters for the image areas corresponding to the respective semantics in the Raw image.
Step S719, obtaining a reference image.
Illustratively, the reference image may refer to an image obtained by processing the Raw image using an existing ISP path algorithm.
Optionally, the reference image may serve as the reference for the fusion processing when step S720 is performed.
Step S720, fusion processing based on semantic regions.
Illustratively, in the fusion processing, a fusion mode of edge expansion can be performed based on the semantic position; for example, assuming that the image area where the semantic A is located is an area 1, an area 2 can be obtained by expanding the image area outwards on the basis of the area 1, and fusion processing is performed on the area 2 and a reference image; wherein region 2 includes the image content of region 1, and the area of region 2 is greater than the area of region 1.
In the embodiment of the application, the fusion mode of edge expansion based on the semantic position is adopted during fusion processing, so that the obtained fusion image can be smoothly transited in different image areas, the problem that the edges of the areas in the different areas in the image are abrupt is avoided, and the image quality of the fusion image is improved.
Optionally, the fusion mode of edge expansion based on the semantic position may be an edge expansion mode adopting a circle; for example, assuming that the image area where the semantic meaning a is located is the area 1, a circular area 2 can be obtained by expanding the image area outwards on the basis of the area 1, the area 2 includes the area 1, and the area of the area 2 is larger than that of the area 1.
In the embodiment of the application, the round edge is smoother than the rectangle, so that the problem that the fusion edge in the fusion image obtained after fusion processing is abrupt or discontinuous in color, detail and the like can be reduced by adopting a round edge expanding mode, and the image quality of the fusion image is improved.
In one example, if the first image area in the target image is an angular image area, the second image area may be obtained by using a circular edge-expanding method.
For example, the center of the first image area may be determined first, and the second image area may be obtained by expanding the center of the first image area to the periphery.
In one example, if the first image region in the target image is an image region without sharp edges, the second image region may be obtained by an edge-expansion mode that enlarges the region by a multiplication coefficient (or by upsampling).
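A minimal sketch of building a circular area-2 mask from a rectangular area 1 (the region coordinates and extra radius below are hypothetical):

```python
import numpy as np

def circular_expansion_mask(shape, region, extra_radius=4):
    """Build a circular area-2 mask around a rectangular area 1: take the
    region's centre, cover the region with a circle, then enlarge the radius."""
    h, w = shape
    top, left, rh, rw = region
    cy, cx = top + rh / 2.0, left + rw / 2.0        # centre of the first image area
    radius = np.hypot(rh / 2.0, rw / 2.0) + extra_radius
    yy, xx = np.mgrid[0:h, 0:w]
    return (np.hypot(yy - cy, xx - cx) <= radius).astype(np.float32)

mask = circular_expansion_mask((64, 64), region=(20, 20, 8, 8))
```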
When the semantic segmentation result includes a plurality of semantic tags and the second image areas corresponding to the semantic tags intersect, a fusion manner that increases the fusion weight coefficient may be adopted during fusion processing, or a fusion manner that fuses the low-priority semantic area first and the high-priority semantic area afterwards may be adopted; the priority of a semantic area can be determined based on the shooting requirements of the user; for example, when a user shoots a portrait, the priority of the portrait semantics is higher than that of other semantics; when the user shoots a landscape, the priority of landscape semantics such as greenery is higher than the priority of the portrait.
It should be understood that the foregoing is illustrative of semantic priorities, and the present application places no limitation on how semantic priorities are determined.
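A minimal sketch of priority-ordered fusion for overlapping regions; the priority values, masks, and tag names are hypothetical:

```python
def fuse_by_priority(reference, masks_and_images, priorities):
    """When expanded semantic regions overlap, fuse low-priority regions first
    so that higher-priority semantics (e.g. the portrait in a portrait shot)
    overwrite the intersection last."""
    fused = reference.copy()
    # Sort ascending: lowest priority fused first, highest fused last.
    for tag in sorted(masks_and_images, key=lambda t: priorities[t]):
        mask, enhanced = masks_and_images[tag]
        fused[mask > 0] = enhanced[mask > 0]
    return fused

# Usage (hypothetical): priorities = {"greenery": 1, "portrait": 2}
```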
Optionally, if the detail information of the fused image is to be promoted, the reference image and the Y-channel image in each local enhanced image may be extracted for fusion processing.
Alternatively, if the color information of the fused image is to be enhanced, the fusion processing may be performed by extracting the image of the UV channel in each of the locally enhanced images and the reference image.
For example, when the image of the UV channel of the reference image is fused with the image of the UV channel of each locally enhanced image, a Y-channel threshold can be set according to semantic requirements and the UV-channel fusion performed within that threshold, so as to avoid partial color loss in the fused image.
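A minimal sketch of UV fusion gated by a Y-channel threshold window, assuming planar float YUV arrays; the threshold values 16 and 235 are illustrative, not values disclosed by this application:

```python
import numpy as np

def fuse_uv_with_y_gate(second_yuv, target_yuv, mask, y_lo=16.0, y_hi=235.0):
    """Fuse UV channels only where the reference luma lies inside a threshold
    window, so near-black and near-white pixels keep their original chroma and
    partial colour loss in the fused image is avoided. Inputs are float (H, W, 3)."""
    gate = (second_yuv[..., 0] >= y_lo) & (second_yuv[..., 0] <= y_hi)
    m = mask * gate.astype(np.float32)
    fused = second_yuv.copy()
    for c in (1, 2):  # U and V planes
        fused[..., c] = m * target_yuv[..., c] + (1.0 - m) * second_yuv[..., c]
    return fused
```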
Optionally, when the reference image and each local enhanced image are fused based on the channel, an edge-expanding fusion mode based on the semantic position can be adopted; for example, in the channel image of the locally enhanced image, assuming that the image area where the semantic A is located is an area 1, an area 2 can be obtained by expanding the image area outwards on the basis of the area 1, and fusion processing is performed on the area 2 and the reference image; wherein region 2 includes the image content of region 1, and the area of region 2 is greater than the area of region 1.
Step S721, obtaining a fusion image.
Illustratively, the fused image may refer to an image obtained by fusing the locally enhanced image with the reference image.
In the embodiment of the application, ISP parameters adapted to different semantic information can be obtained based on the semantic information in the Raw image; ISP processing is carried out based on ISP parameters corresponding to different semantic information, so that ISP processing can be carried out aiming at different semantic areas in the image, and a locally enhanced image is obtained; carrying out fusion processing on the locally enhanced image and the reference image to obtain a fusion image; compared with the prior art that ISP processing is performed by adopting a group of general ISP parameters, the image processing method of the application does not have the problem that the local image area in the image is sacrificed because the global image is required to be balanced through the group of general parameters; by the image processing method, details and/or definition of different semantic areas in the fusion image can be improved, so that image quality of the fusion image is improved.
In addition, the image processing method of the application realizes image enhancement based on different semantic information within ISP path processing; compared with adding an extra algorithm for image enhancement after ISP path processing, the image processing method provided by the application has lower performance requirements on the electronic device and can save power consumption of the electronic device to a certain extent. Compared with semantic recognition and image restoration based on a neural network, the image processing algorithm provided by the embodiment of the application is simpler and more convenient, and places a lower demand on the computing power of the electronic device.
It should be understood that the above steps S701 to S721 are illustrated with three pieces of semantic information included in the image; the amount of semantic information in the image is not limited in any way, and the number of ISP path branches may be dynamically adjusted based on the number of semantics in the Raw image.
Illustratively, the semantic segmentation process shown in FIG. 8 described above is performed in the Raw color space; alternatively, as shown in fig. 9, the semantic segmentation process may also be performed in the YUV color space.
Fig. 9 is a schematic diagram of an image processing method provided in an embodiment of the present application. The image processing method may be performed by the electronic device shown in fig. 1; the method 800 includes steps S801 to S817, and the steps S801 to S817 are described in detail below.
Step S801, acquiring a Raw image.
It should be understood that in embodiments of the present application, a Raw image may refer to an image of a Raw color space.
Optionally, the obtained Raw image may be a single frame Raw image, or may also be a multi-frame Raw image; the present application is not limited in any way.
Step S802, automatic white balance processing.
Illustratively, as shown in fig. 9, since the semantic segmentation process is performed in the YUV color space, there may be no need to recognize semantic information in the Raw image when performing step S802 and step S803.
Step S803, noise reduction processing.
It should be appreciated that the noise reduction process is used to reduce noise in the image; since noise existing in an image affects the visual experience of a user, the image quality of the image can be improved to some extent through noise reduction processing.
Step S804, semantic segmentation processing.
In the embodiment of the application, a Raw image can be acquired; after the Raw-domain algorithms are executed on the Raw image, the image is converted from the Raw color space into the YUV color space, and semantic segmentation processing is performed on the image in the YUV color space. Since a base-effect image can be generated in the YUV color space, if the image quality of a certain image area of the base-effect image is poor, readjustment can be performed based on that area; semantic segmentation and subsequent image processing in the YUV color space therefore offer greater flexibility.
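A minimal sketch of moving an RGB (demosaiced) image into the YUV color space with the BT.601 full-range matrix, one common convention; the application does not specify which conversion standard is used:

```python
import numpy as np

# BT.601 full-range RGB -> YUV conversion matrix (one common choice).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.169, -0.331,  0.500],
                    [ 0.500, -0.419, -0.081]], dtype=np.float32)

def rgb_to_yuv(rgb):
    """Convert an (H, W, 3) RGB image to YUV for segmentation and fusion."""
    yuv = rgb.astype(np.float32) @ RGB2YUV.T
    yuv[..., 1:] += 128.0  # offset chroma into the unsigned 0..255 range
    return yuv

yuv = rgb_to_yuv(np.random.randint(0, 255, (8, 8, 3), np.uint8))
```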
Illustratively, performing ISP processing based on parameters corresponding to semantic a may include step S805 and step S806.
Step S805, first color processing.
Illustratively, the first color processing may refer to image color processing algorithms executed on the YUV image based on the parameters called for color processing, which correspond to semantic A.
Alternatively, the first color processing may include saturation processing, color correction processing, or color temperature estimation. Saturation, also called purity, describes the vividness of colors in an image, and saturation processing is used to improve it; color correction processing is used to calibrate the accuracy of colors other than white in an image; color temperature estimation means that some non-blackbody light sources in an image can be described by the color temperature of the blackbody they most closely resemble visually.
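As an illustration of the saturation processing just described, the sketch below scales the chroma channels of an 8-bit YUV image around their neutral value; the gain value is an assumed example, not a parameter disclosed in this application.

```python
# Hedged sketch of saturation processing on an 8-bit YUV image: scaling the
# U and V channels around the neutral value 128 changes color vividness.
import numpy as np

def adjust_saturation(yuv: np.ndarray, gain: float = 1.2) -> np.ndarray:
    out = yuv.astype(np.float32)
    out[..., 1:] = (out[..., 1:] - 128.0) * gain + 128.0  # scale chroma (U, V)
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```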
Step S806, a first sharpness process.
It should be appreciated that definition, sometimes also referred to as "sharpness", is an indicator reflecting the clarity of the image plane and the sharpness of the image edges.
Illustratively, the first sharpness processing may refer to a sharpness processing algorithm executed on the YUV image based on the parameters called for sharpness processing, which correspond to semantic A.
For example, the parameters corresponding to the sharpness processing may include sharpening strength parameters.
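For illustration, a sharpening strength parameter can be applied with a standard unsharp mask, as in the hedged sketch below; the application does not name the sharpening algorithm, so the unsharp mask is an assumption.

```python
# Hedged sketch of sharpness processing driven by a sharpening strength
# parameter; implemented here as an unsharp mask on the luma (Y) channel.
import cv2
import numpy as np

def sharpen(y: np.ndarray, strength: float = 0.5) -> np.ndarray:
    blurred = cv2.GaussianBlur(y, (0, 0), sigmaX=1.5)  # low-pass copy of Y
    # Larger `strength` boosts the high-frequency detail more strongly.
    return cv2.addWeighted(y, 1.0 + strength, blurred, -strength, 0)
```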
Step S807, obtaining a local enhanced image based on parameters corresponding to the semantic A.
It should be understood that the locally enhanced image obtained based on the parameters corresponding to semantic A may refer to an image obtained by performing image processing on the YUV image based on those parameters; because the parameters called for semantic A are the ISP parameters adapted to semantic A, even when the whole YUV image is processed with these parameters, the enhancement effect on the image area corresponding to semantic A is better than on other areas; therefore, the image obtained after this processing can be regarded as an image in which the area where semantic A is located is locally enhanced.
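Putting the two steps together, one ISP branch (steps S805 to S807) can be sketched as below, building on the adjust_saturation and sharpen sketches above; the params dictionary and its field names are hypothetical, not a structure defined by this application.

```python
# Hedged sketch of one ISP branch: the whole YUV image is processed with the
# parameters called for one semantic label, so the area where that semantic
# lies ends up best enhanced. `params` is a hypothetical parameter structure.
def isp_branch(yuv, params):
    out = adjust_saturation(yuv, gain=params["saturation_gain"])              # color processing
    out[..., 0] = sharpen(out[..., 0], strength=params["sharpen_strength"])   # sharpness processing
    return out  # locally enhanced image for this semantic label
```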
Illustratively, performing ISP processing based on parameters corresponding to semantics B may include steps S808 and S809.
Step S808, second color processing.
Illustratively, the second color processing may refer to image color processing algorithms executed on the YUV image based on the parameters called for color processing, which correspond to semantic B.
Alternatively, the second color process may include a saturation process, a color correction process, or a color temperature estimation, or the like.
Step S809, second sharpness processing.
Illustratively, the second sharpness processing may refer to a sharpness processing algorithm executed on the YUV image based on the parameters called for sharpness processing, which correspond to semantic B.
For example, the parameters corresponding to the sharpness processing may include sharpening strength parameters.
Step S810, obtaining a local enhanced image based on parameters corresponding to the semantics B.
It should be understood that the locally enhanced image obtained based on the parameters corresponding to semantic B may refer to an image obtained by performing image processing on the YUV image based on those parameters; because the parameters called for semantic B are the ISP parameters adapted to semantic B, even when the whole YUV image is processed with these parameters, the enhancement effect on the image area corresponding to semantic B is better than on other areas; therefore, the image obtained after this processing can be regarded as an image in which the area where semantic B is located is locally enhanced.
Illustratively, performing ISP processing based on parameters corresponding to semantics C may include step S811 and step S812.
Step S811, third color processing.
Illustratively, the third color processing may refer to image color processing algorithms executed on the YUV image based on the parameters called for color processing, which correspond to semantic C.
Alternatively, the third color process may include a saturation process, a color correction process, or a color temperature estimation, or the like.
Step S812, third sharpness processing.
Illustratively, the third sharpness processing may refer to a sharpness processing algorithm executed on the YUV image based on the parameters called for sharpness processing, which correspond to semantic C.
For example, the parameters corresponding to the sharpness processing may include sharpening strength parameters.
Step S813, obtaining a local enhanced image based on the parameters corresponding to the semantics C.
It should be understood that the locally enhanced image obtained based on the parameters corresponding to semantic C may refer to an image obtained by performing image processing on the YUV image based on those parameters; because the parameters called for semantic C are the ISP parameters adapted to semantic C, even when the whole YUV image is processed with these parameters, the enhancement effect on the image area corresponding to semantic C is better than on other areas; therefore, the image obtained after this processing can be regarded as an image in which the area where semantic C is located is locally enhanced.
Step S814, ISP processing using general parameters.
By way of example, a conventional ISP path algorithm may be employed to perform ISP processing on the Raw image based on a common set of parameters; that is, when the Raw image is processed with the conventional ISP path algorithm, the same set of general parameters is used for the image areas corresponding to all semantics in the Raw image.
Step S815, obtaining a reference image.
Illustratively, the reference image may refer to an image obtained by processing the Raw image with an existing ISP path algorithm.
Alternatively, the reference image may serve as the reference of the fusion processing when step S816 is performed.
Step S816, fusion processing based on semantic regions.
Illustratively, during the fusion processing, an edge-expansion fusion mode based on the semantic position can be adopted; for example, assuming that the image area where semantic A is located is area 1, the image area can be expanded outwards on the basis of area 1 to obtain area 2, and fusion processing is performed on area 2 and the reference image; area 2 includes the image content of area 1, and the area of area 2 is larger than that of area 1.
In the embodiment of the application, because the edge-expansion fusion mode based on the semantic position is adopted during the fusion processing, the obtained fusion image transitions smoothly between different image areas, the problem of abrupt region edges between different areas of the image is avoided, and the image quality of the fusion image is improved.
Optionally, the edge-expansion fusion based on the semantic position may adopt circular edge expansion; for example, assuming that the image area where semantic A is located is area 1, a circular area 2 can be obtained by expanding outwards on the basis of area 1; area 2 includes area 1, and the area of area 2 is larger than that of area 1.
In the embodiment of the application, because a circular edge is smoother than a rectangular one, adopting circular edge expansion can reduce abrupt transitions or discontinuities in color, detail, and the like at the fusion edges of the fusion image obtained after the fusion processing, thereby improving the image quality of the fusion image.
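The edge-expansion fusion can be sketched as follows: the mask of area 1 is dilated with an elliptical kernel into area 2, and a feathered weight blends the locally enhanced image into the reference image; the kernel size and feathering radius are assumed values for illustration.

```python
# Hedged sketch of edge-expansion fusion based on the semantic position.
# `mask` is uint8 with value 1 inside area 1 (where the semantic lies).
import cv2
import numpy as np

def expand_and_fuse(enhanced, reference, mask, expand_px=15):
    size = 2 * expand_px + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    region2 = cv2.dilate(mask, kernel)                 # area 2 contains area 1
    # Feather the expanded mask so different image areas transition smoothly.
    weight = cv2.GaussianBlur(region2.astype(np.float32), (0, 0), expand_px / 3.0)
    weight = np.clip(weight, 0.0, 1.0)[..., None]      # HxWx1 for broadcasting
    fused = weight * enhanced.astype(np.float32) + (1.0 - weight) * reference.astype(np.float32)
    return fused.astype(np.uint8)
```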
In one example, if the first image area in the target image is an image area with corners, the second image area may be obtained by circular edge expansion.
For example, the center of the first image area is determined first, and the circular second image area is obtained by expanding outwards from that center.
In one example, if the first image area in the target image is an image area without edges, the second image area may be obtained by an edge-expansion method that enlarges by a magnification coefficient (that is, upsampling).
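For such a region, the magnification-based expansion mentioned above can be sketched as below; the coefficient and image bounds are assumed values used only for illustration.

```python
# Hedged sketch of magnification-based edge expansion: enlarge the bounding
# box of area 1 around its center by `coeff` to obtain area 2, clamped to
# the image bounds (width, height).
def expand_box(x, y, w, h, coeff=1.3, bounds=(4096, 3072)):
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * coeff, h * coeff
    x0, y0 = max(0, int(cx - nw / 2)), max(0, int(cy - nh / 2))
    x1, y1 = min(bounds[0], int(cx + nw / 2)), min(bounds[1], int(cy + nh / 2))
    return x0, y0, x1, y1
```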
When the semantic segmentation result includes a plurality of semantic labels and the second image areas corresponding to the respective semantic labels intersect, the fusion processing may adopt a fusion mode that increases the fusion weight coefficient, or a fusion mode that fuses the low-priority semantic area first and then the high-priority semantic area; the priority of a semantic area can be determined based on the shooting requirements of the user. For example, when a user shoots a portrait, the priority of the portrait semantic is higher than that of other semantics; when the user shoots a landscape, the priority of landscape semantics such as green plants is higher than the priority of the portrait.
It should be understood that the foregoing is illustrative of semantic priorities, and the present application is not limited in any way to determining semantic priorities.
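The priority-ordered fusion described above can be sketched as follows: low-priority semantic areas are blended first so that high-priority areas, blended last, dominate where the areas intersect. The priority values themselves are assumptions determined by the shooting scene, as noted above.

```python
# Hedged sketch of fusing low-priority semantic areas first and
# high-priority areas last; in intersections the area fused last dominates.
import numpy as np

def fuse_by_priority(reference, branches):
    # `branches`: iterable of (priority, enhanced_image, weight_mask) tuples,
    # where weight_mask is an HxW float array in [0, 1].
    fused = reference.astype(np.float32)
    for _, enhanced, weight in sorted(branches, key=lambda b: b[0]):
        w = weight[..., None]  # broadcast the HxW mask over the color channels
        fused = w * enhanced.astype(np.float32) + (1.0 - w) * fused
    return fused.astype(np.uint8)
```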
Optionally, if the detail information of the fusion image is to be improved, the Y-channel images of the reference image and of each locally enhanced image may be extracted for fusion processing.
Alternatively, if the color information of the fusion image is to be enhanced, the UV-channel images of the reference image and of each locally enhanced image may be extracted for fusion processing.
Optionally, when the reference image and each locally enhanced image are fused channel by channel, the edge-expansion fusion mode based on the semantic position can also be adopted; for example, in a channel image of the locally enhanced image, assuming that the image area where semantic A is located is area 1, area 2 can be obtained by expanding outwards on the basis of area 1, and fusion processing is performed on area 2 and the reference image; area 2 includes area 1, and the area of area 2 is larger than that of area 1.
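Channel-wise fusion can be sketched as below: the Y channel is taken from the locally enhanced image to lift detail, or the UV channels to lift color, while the remaining channels are kept from the reference image; the weight is assumed to come from the edge-expanded semantic mask.

```python
# Hedged sketch of channel-based fusion in the YUV color space.
import numpy as np

def fuse_channels(reference_yuv, enhanced_yuv, weight, channel="Y"):
    # `weight` is an HxW float array in [0, 1] (the feathered semantic mask).
    fused = reference_yuv.astype(np.float32)
    channels = [0] if channel == "Y" else [1, 2]   # Y lifts detail; UV lifts color
    for c in channels:
        fused[..., c] = weight * enhanced_yuv[..., c] + (1.0 - weight) * fused[..., c]
    return fused.astype(np.uint8)
```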
Step S817, obtaining a fusion image.
For example, the fused image may refer to an image obtained by fusing the locally enhanced image with the reference image.
It should be understood that the above steps S801 to S817 are described by taking an image including three types of semantic information as an example; the number of types of semantic information in the image is not limited in any way; the number of parallel ISP path branches may be adjusted dynamically based on the number of semantics in the Raw image.
In the embodiment of the application, ISP parameters adapted to different semantic information can be obtained based on the semantic information in the Raw image; ISP processing is performed based on the ISP parameters corresponding to the different semantic information, so that each semantic area in the image can be processed separately to obtain locally enhanced images; the locally enhanced images are fused with the reference image to obtain a fusion image. Compared with the prior art, in which ISP processing is performed with a single set of general ISP parameters, the image processing method of the application does not sacrifice local image areas in order to balance the global image through one set of general parameters; by the image processing method, the detail and/or definition of different semantic areas in the fusion image can be improved, thereby improving the image quality of the fusion image.
In addition, the image processing method of the application realizes image enhancement based on different semantic information during ISP path processing; compared with adding an extra algorithm for image enhancement after ISP path processing, the image processing method of the application places lower performance requirements on the electronic equipment and can save power consumption of the electronic equipment to a certain extent.
Fig. 10 is an effect schematic diagram of an image processing method provided in an embodiment of the present application.
As shown in fig. 10, (a) of fig. 10 is a preview image obtained by an existing image processing method, and (b) of fig. 10 is a preview image obtained by the image processing method provided in the embodiment of the application. The detail sharpness in the preview image shown in (a) of fig. 10 is too high, so the hairline area appears distorted and unnatural; for example, compared with area 910 shown in (a) of fig. 10, area 920 shown in (b) of fig. 10 has reduced sharpness in the hairline area, making the hairline more natural; the realism (for example, naturalness) of area 940 shown in (b) of fig. 10 is superior to that of area 930 shown in (a) of fig. 10. Therefore, compared with the existing scheme, processing an image with the image processing method provided in the embodiment of the application can improve the naturalness of the image and thus the image quality.
The image processing method provided in the embodiment of the present application is described in detail above with reference to fig. 1 to 10; embodiments of the device of the present application will be described in detail below with reference to fig. 11 and 12. It should be understood that the apparatus in the embodiments of the present application may perform the methods in the embodiments of the present application, that is, specific working procedures of the following various products may refer to corresponding procedures in the embodiments of the methods.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1000 includes a processing module 1010 and an acquisition module 1020.
Wherein the processing module 1010 is configured to turn on a camera application in the electronic device; the acquiring module 1020 is configured to acquire a first image, where the first image is an image in a first color space; the processing module 1010 is configured to perform a first image processing on the first image to obtain a second image, where the second image is an image in a second color space; carrying out semantic segmentation processing on the first image to obtain a semantic segmentation result, wherein the semantic segmentation result is used for indicating semantic information in the first image; invoking a target parameter based on the semantic segmentation result, wherein the target parameter corresponds to the semantic segmentation result; performing second image processing on the first image based on the target parameters to obtain a target image; and carrying out fusion processing on the target image and the second image to obtain a fusion image, wherein the image quality of the fusion image is better than that of the second image.
Optionally, as an embodiment, the fusing the target image and the second image to obtain a fused image includes:
and carrying out fusion processing on the target image and the second image based on the semantic segmentation result to obtain the fusion image.
Optionally, as an embodiment, the processing module 1010 is specifically configured to:
determining a first image region in the target image based on the semantic segmentation result;
determining a second image area based on the first image area, wherein the second image area comprises the image content of the first image area, and the area of the second image area is larger than that of the first image area;
and carrying out fusion processing on the second image area and the second image to obtain the fusion image.
Optionally, as an embodiment, the processing module 1010 is specifically configured to:
and carrying out up-sampling processing on the first image area to obtain the second image area.
Optionally, as an embodiment, the second image area is a circular image area.
Optionally, as an embodiment, the processing module 1010 is specifically configured to:
and carrying out fusion processing on the image of the first channel of the target image and the image of the first channel of the second image to obtain the fusion image.
Optionally, as an embodiment, the first channel is a Y channel, or the first channel is a UV channel.
Optionally, as an embodiment, the semantic segmentation result includes at least two labels, where the at least two labels include a first label and a second label, the first label is used to indicate semantic information of a third image area in the first image, the second label is used to indicate semantic information of a fourth image area in the first image, the target parameter includes a first parameter and a second parameter, the first parameter corresponds to the first label, the second parameter corresponds to the second label, the target image includes a first target image and a second target image, and the processing module 1010 is specifically configured to:
performing the second image processing on the first image based on the first parameter to obtain the first target image;
and carrying out second image processing on the first image based on the second parameter to obtain the second target image.
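For orientation, the processing module's overall flow can be sketched end to end, reusing the isp_branch and fuse_by_priority sketches above; the PARAMS table (labels, gains, strengths, priorities) is entirely hypothetical and not disclosed in this application.

```python
# Hedged end-to-end sketch of the processing module's flow; all tuning values
# below are assumptions, not parameters disclosed in this application.
import cv2
import numpy as np

PARAMS = {
    1: {"saturation_gain": 1.1, "sharpen_strength": 0.3, "priority": 2},  # e.g. portrait
    2: {"saturation_gain": 1.3, "sharpen_strength": 0.6, "priority": 1},  # e.g. green plant
}

def process(reference_yuv, yuv, labels):
    branches = []
    for label, p in PARAMS.items():
        enhanced = isp_branch(yuv, p)                       # per-semantic ISP branch
        mask = (labels == label).astype(np.float32)         # area 1 for this label
        weight = np.clip(cv2.GaussianBlur(mask, (0, 0), 5.0), 0.0, 1.0)  # expanded, feathered
        branches.append((p["priority"], enhanced, weight))
    return fuse_by_priority(reference_yuv, branches)        # the fusion image
```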
The electronic device 1000 is embodied here in the form of functional modules. The term "module" may be implemented in the form of software and/or hardware, which is not specifically limited.
For example, a "module" may be a software program, a hardware circuit, or a combination of both that implements the functionality described above. The hardware circuitry may include application specific integrated circuits (application specific integrated circuit, ASICs), electronic circuits, processors (e.g., shared, proprietary, or group processors, etc.) and memory for executing one or more software or firmware programs, merged logic circuits, and/or other suitable components that support the described functions.
Thus, the elements of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 12 shows a schematic structural diagram of an electronic device provided in the present application. The dashed line in fig. 12 indicates that the unit or the module is optional; the electronic device 1100 may be used to implement the image processing method described in the method embodiments described above.
Illustratively, the electronic device 1100 includes one or more processors 1101, the one or more processors 1101 being operable to support the electronic device 1100 in implementing the image processing method in the method embodiments. The processor 1101 may be a general purpose processor or a special purpose processor. For example, the processor 1101 may be a central processing unit (central processing unit, CPU), digital signal processor (digital signal processor, DSP), application specific integrated circuit (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA), or other programmable logic device such as discrete gates, transistor logic, or discrete hardware components.
The processor 1101 may be used to control the electronic device 1100, execute software programs, and process data of the software programs, for example. The electronic device 1100 may also include a communication unit 1105 to enable input (reception) and output (transmission) of signals.
For example, the electronic device 1100 may be a chip, the communication unit 1105 may be an input and/or output circuit of the chip, or the communication unit 1105 may be a communication interface of the chip, which may be an integral part of a terminal device or other electronic device.
For another example, the electronic device 1100 may be a terminal device, the communication unit 1105 may be a transceiver of the terminal device, or the communication unit 1105 may be a transceiver circuit of the terminal device.
For example, the electronic device 1100 may include one or more memories 1102, on which a program 1104 is stored, the program 1104 being executable by the processor 1101 to generate instructions 1103, such that the processor 1101 performs the image processing method described in the above method embodiments according to the instructions 1103.
Optionally, the memory 1102 may also store data.
Optionally, the processor 1101 may also read data stored in the memory 1102, which may be stored at the same memory address as the program 1104, or which may be stored at a different memory address than the program 1104.
The processor 1101 and the memory 1102 may be provided separately or may be integrated together, for example, on a System On Chip (SOC) of the terminal device.
Illustratively, the memory 1102 may be used to store a related program 1104 of the image processing method provided in the embodiment of the present application, and the processor 1101 may be used to call the related program 1104 of the image processing method stored in the memory 1102 when performing image processing, to perform the image processing method of the embodiment of the present application; for example, a camera application in an electronic device is started; acquiring a first image, wherein the first image is an image of a first color space; performing first image processing on the first image to obtain a second image, wherein the second image is an image of a second color space; carrying out semantic segmentation processing on the first image to obtain a semantic segmentation result, wherein the semantic segmentation result is used for indicating semantic information in the first image; invoking target parameters based on the semantic segmentation result, wherein the target parameters correspond to the semantic segmentation result; performing second image processing on the first image based on the target parameters to obtain a target image; and carrying out fusion processing on the target image and the second image to obtain a fusion image, wherein the image quality of the fusion image is better than that of the second image.
The present application also provides a computer program product which, when executed by the processor 1101, implements the image processing method of any of the method embodiments of the present application.
The computer program product may be stored in the memory 1102, for example, the program 1104, and the program 1104 may be finally converted into an executable object file capable of being executed by the processor 1101 through preprocessing, compiling, assembling, and linking.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a computer, implements the image processing method according to any of the method embodiments of the present application. The computer program may be a high-level language program or an executable object program.
By way of example, the computer-readable storage medium may be the memory 1102. The memory 1102 may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM), which serves as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described embodiments of the electronic device are merely illustrative, e.g., the division of the modules is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
It should be understood that, in the various embodiments of the present application, the size of the sequence number of each process does not imply its execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In addition, the term "and/or" herein is merely an association relation describing an association object, and means that three kinds of relations may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application should be defined by the claims, and the above description is only a preferred embodiment of the technical solution of the present application, and is not intended to limit the protection scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (11)

1. An image processing method, applied to an electronic device, comprising:
starting a camera application program in the electronic equipment;
acquiring a first image, wherein the first image is an image of a first color space;
performing first image processing on the first image to obtain a second image, wherein the second image is an image in a second color space;
carrying out semantic segmentation processing on the first image to obtain a semantic segmentation result, wherein the semantic segmentation result is used for indicating semantic information in the first image;
invoking a target parameter based on the semantic segmentation result, wherein the target parameter corresponds to the semantic segmentation result;
performing second image processing on the first image based on the target parameters to obtain a target image;
and carrying out fusion processing on the target image and the second image to obtain a fusion image, wherein the image quality of the fusion image is better than that of the second image.
2. The image processing method according to claim 1, wherein the fusing the target image and the second image to obtain a fused image includes:
and carrying out fusion processing on the target image and the second image based on the semantic segmentation result to obtain the fusion image.
3. The image processing method according to claim 2, wherein the fusing the target image and the second image based on the semantic segmentation result to obtain the fused image includes:
determining a first image region in the target image based on the semantic segmentation result;
determining a second image area based on the first image area, wherein the second image area comprises the image content of the first image area, and the area of the second image area is larger than that of the first image area;
and carrying out fusion processing on the second image area and the second image to obtain the fusion image.
4. The image processing method of claim 3, wherein the determining a second image region based on the first image region comprises:
and carrying out up-sampling processing on the first image area to obtain the second image area.
5. The image processing method of claim 3, wherein the second image area is a circular image area.
6. The image processing method according to any one of claims 1 to 5, wherein the fusing the target image and the second image to obtain a fused image includes:
and carrying out fusion processing on the image of the first channel of the target image and the image of the first channel of the second image to obtain the fusion image.
7. The image processing method of claim 6, wherein the first channel is a Y channel or the first channel is a UV channel.
8. The image processing method according to any one of claims 1 to 7, wherein the semantic segmentation result includes at least two labels, the at least two labels including a first label for indicating semantic information of a third image area in the first image and a second label for indicating semantic information of a fourth image area in the first image, the target parameter includes a first parameter and a second parameter, the first parameter corresponds to the first label, the second parameter corresponds to the second label, and the target image includes a first target image and a second target image, and wherein the performing second image processing on the first image based on the target parameters to obtain a target image includes:
performing the second image processing on the first image based on the first parameter to obtain the first target image;
and carrying out second image processing on the first image based on the second parameter to obtain the second target image.
9. A chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform the image processing method of any of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to execute the image processing method according to any one of claims 1 to 8.
11. A computer program product, characterized in that the computer program product comprises computer program code which, when executed by a processor, causes the processor to perform the image processing method of any of claims 1 to 8.
CN202210588320.8A 2022-05-27 2022-05-27 Image processing method and electronic equipment Pending CN116029951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210588320.8A CN116029951A (en) 2022-05-27 2022-05-27 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210588320.8A CN116029951A (en) 2022-05-27 2022-05-27 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN116029951A true CN116029951A (en) 2023-04-28

Family

ID=86074913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210588320.8A Pending CN116029951A (en) 2022-05-27 2022-05-27 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116029951A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017189A (en) * 2020-10-26 2020-12-01 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium
CN112184886A (en) * 2020-09-28 2021-01-05 北京乐学帮网络技术有限公司 Image processing method and device, computer equipment and storage medium
CN113395440A (en) * 2020-03-13 2021-09-14 华为技术有限公司 Image processing method and electronic equipment
CN113538227A (en) * 2020-04-20 2021-10-22 华为技术有限公司 Image processing method based on semantic segmentation and related equipment
CN113689373A (en) * 2021-10-21 2021-11-23 深圳市慧鲤科技有限公司 Image processing method, device, equipment and computer readable storage medium
US20210366127A1 (en) * 2019-05-07 2021-11-25 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination