CN113409205B - Image processing method, image processing device, storage medium and electronic apparatus - Google Patents
- Publication number
- CN113409205B (application CN202110649896.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- color
- color channel
- light damage
- channel value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Abstract
The disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium and an electronic device, and relates to the technical field of image processing. The image processing method comprises: acquiring a first image collected by a camera configured with a light damage resistant filter and a second image collected by a camera not configured with the light damage resistant filter, the first image and the second image being images collected for the same scene; and performing color adjustment processing on the first image according to the second image to obtain a light damage reduction image corresponding to the first image, or performing color adjustment processing on the second image according to the first image to obtain a light damage resistant image corresponding to the second image. Color adjustment can thus be carried out on the basis of the first image collected with the light damage resistant filter and the second image collected without it, so as to obtain an image meeting the actual demand.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device.
Background
At present, photographing has become an indispensable part of people's daily life and work. As people's requirements for photographing increase, terminal devices have been equipped with a variety of camera devices to meet different needs; for example, a smart phone may simultaneously carry several cameras such as telephoto, wide-angle and ultra-wide-angle cameras. Whichever camera is used, ambient light has a great influence on the quality of the captured image. Under the influence of urban light pollution, when a user shoots the sky in a dark shooting environment such as a night scene, the captured image often shows a yellowish or murky sky, which degrades the user's viewing experience.
In the prior art, in order to improve the quality of the captured image, professional photographers generally use a light damage resistant (anti-light-pollution) filter to filter out the light emitted by artificial light sources such as sodium lamps or mercury lamps. However, such a filter is often costly. Moreover, not all shooting scenes require it in practice, and the user has to install and remove the light damage resistant filter according to the shooting requirements; the operation is therefore complex, it is difficult to adjust flexibly according to the user's needs for anti-photodamage shooting, and the filter is very inconvenient to use.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, thereby solving, at least to some extent, the problem in the prior art that an anti-photodamage image cannot be obtained in a simple, low-cost manner.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an image processing method including: acquiring a first image acquired by a camera provided with a light damage resistance filter and a second image acquired by a camera not provided with the light damage resistance filter, wherein the first image and the second image are images acquired for the same scene; and performing color adjustment processing on the first image according to the second image to obtain a light damage reduction image corresponding to the first image, or performing color adjustment processing on the second image according to the first image to obtain a light damage resistant image corresponding to the second image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus including: the image acquisition module is used for acquiring a first image acquired by a camera provided with a light damage resistance filter and a second image acquired by a camera not provided with the light damage resistance filter, wherein the first image and the second image are images acquired for the same scene; the image processing module is used for carrying out color adjustment processing on the first image according to the second image to obtain a light damage reduction image corresponding to the first image, or carrying out color adjustment processing on the second image according to the first image to obtain a light damage resistant image corresponding to the second image.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and the memory is used for storing executable instructions of the processor. Wherein the processor is configured to perform the image processing method of the first aspect and possible implementations thereof via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
A first image collected by a camera configured with a light damage resistant filter and a second image collected by a camera not configured with the light damage resistant filter are acquired, the first image and the second image being images collected for the same scene; color adjustment processing is then performed on the first image according to the second image to obtain a light damage reduction image corresponding to the first image, or on the second image according to the first image to obtain a light damage resistant image corresponding to the second image. On the one hand, the camera configured with the light damage resistant filter and the camera not configured with it cooperate to collect images with and without the anti-photodamage effect, and the first image or the second image is processed according to the actual image requirement, so that a light damage reduction image or a light damage resistant image can be obtained simply and effectively, flexibly meeting the actual needs of different scenes. On the other hand, when the second image is color-adjusted according to the first image, the resulting light damage resistant image not only has the anti-photodamage effect but also retains the advantages of the second image, and when the first image is color-adjusted according to the second image, the anti-photodamage effect in the first image can be removed simply and effectively. In still another aspect, when acquiring the light damage reduction image or the light damage resistant image, there is no need to install or remove the light damage resistant filter on the camera, the image acquisition process is simple, and a convenient use experience can be provided for the user.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 shows a schematic diagram of a system architecture in the present exemplary embodiment;
fig. 2 shows a structural diagram of an electronic device in the present exemplary embodiment;
fig. 3 shows a flowchart of an image processing method in the present exemplary embodiment;
fig. 4 shows a schematic diagram of a mobile terminal back camera arrangement in the present exemplary embodiment;
fig. 5 shows a schematic diagram of another mobile terminal back camera arrangement in the present exemplary embodiment;
fig. 6 is a schematic view showing an arrangement of a light damage resistant filter in the present exemplary embodiment;
fig. 7 shows a schematic diagram of still another mobile terminal back camera arrangement in the present exemplary embodiment;
Fig. 8 shows a schematic diagram of the first image and the second image before alignment in the present exemplary embodiment;
fig. 9 shows a schematic diagram of the first image and the second image after alignment in the present exemplary embodiment;
fig. 10 shows a flowchart of another image processing method in the present exemplary embodiment;
fig. 11 shows a sub-flowchart of an image processing method in the present exemplary embodiment;
fig. 12 shows a sub-flowchart of another image processing method in the present exemplary embodiment;
fig. 13 shows a sub-flowchart of still another image processing method in the present exemplary embodiment;
fig. 14 shows schematic diagrams of high saturation regions of the first image and the second image in the present exemplary embodiment;
fig. 15 shows a schematic diagram of a union of high saturation regions of a first image and a second image in the present exemplary embodiment;
fig. 16 shows a schematic diagram of a first region of interest and a second region of interest determined in the present exemplary embodiment;
fig. 17 shows a schematic diagram of another first region of interest and a second region of interest determined in the present exemplary embodiment;
fig. 18 is a flowchart showing still another image processing method in the present exemplary embodiment;
Fig. 19 shows a configuration diagram of an image processing apparatus in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced with one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Exemplary embodiments of the present disclosure provide an image processing method. Fig. 1 shows a system architecture diagram of an operating environment of the present exemplary embodiment. As shown in fig. 1, the system architecture 100 may include a server 110 and a terminal 120, which communicate with each other through a network; for example, the server 110 transmits a processed image to the terminal 120, and the terminal 120 displays the corresponding image on its display screen. The server 110 is a background server providing the image processing service; the terminal 120 is a terminal device equipped with a camera configured with a light damage resistant filter and a camera not configured with one, including but not limited to a smart phone, a tablet computer, a game console, a wearable device, etc.
It should be understood that the number of devices in fig. 1 is merely exemplary. Any number of terminals may be provided, and the server may be a cluster formed by a plurality of servers, according to implementation requirements.
The image processing method provided in the embodiments of the present disclosure may be executed by the server 110: for example, after the terminal 120 collects the first image and the second image, it uploads them to the server 110, and after the server 110 performs the image processing, the light damage reduction image or the light damage resistant image is returned to the terminal 120 for display. The method may also be executed by the terminal 120: for example, the terminal 120 performs the image processing directly after collecting the first image and the second image, to obtain the light damage reduction image or the light damage resistant image. The present disclosure is not limited in this regard.
Exemplary embodiments of the present disclosure provide an electronic device for implementing an image processing method, which may be the server 110 or the terminal 120 in fig. 1. The electronic device comprises at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the image processing method via execution of the executable instructions.
The configuration of the above-described electronic device will be exemplarily described below taking the mobile terminal 200 in fig. 2 as an example. It will be appreciated by those skilled in the art that the configuration of fig. 2 can also be applied to stationary type devices in addition to components specifically for mobile purposes.
As shown in fig. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a USB (Universal Serial Bus) interface 230, a charge management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, a headset interface 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, keys 294, a SIM (Subscriber Identity Module) card interface 295, and the like.
The processor 210 may include one or more processing units, for example an AP (Application Processor), a modem processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband processor and/or an NPU (Neural-network Processing Unit), and the like. The encoder can encode (i.e. compress) image or video data; the decoder can decode (i.e. decompress) the code stream data of an image or video to restore the image or video data.
In some embodiments, processor 210 may include one or more interfaces through which connections are made with other components of mobile terminal 200.
Internal memory 221 may be used to store computer executable program code that includes instructions. The internal memory 221 may include a volatile memory, a nonvolatile memory, and the like. The processor 210 performs various functional applications of the mobile terminal 200 and data processing by executing instructions stored in the internal memory 221 and/or instructions stored in a memory provided in the processor.
The external memory interface 222 may be used to connect an external memory, such as a Micro SD card, to enable expansion of the memory capabilities of the mobile terminal 200. The external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing files of music, video, etc.
The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 200, or may be connected to a headset or other electronic device.
The charge management module 240 is configured to receive a charge input from a charger. The charging management module 240 may also supply power to the device through the power management module 241 while charging the battery 242; the power management module 241 may also monitor the status of the battery.
The wireless communication function of the mobile terminal 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 250 may provide solutions for wireless communication including 2G/3G/4G/5G applied to the mobile terminal 200. The wireless communication module 260 may provide wireless communication solutions applied to the mobile terminal 200, including WLAN (Wireless Local Area Network, e.g. a Wi-Fi network), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), IR (Infrared), etc.
The mobile terminal 200 may implement a display function through a GPU, a display screen 290, an AP, and the like, and display a user interface.
The mobile terminal 200 may implement a photographing function through an ISP, an image capturing module 291, an encoder, a decoder, a GPU, a display screen 290, an AP, etc., and may implement an audio function through an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, an AP, etc.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, a barometric pressure sensor 2804, etc. to implement different sensing functions.
The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc. The motor 293 may generate vibration cues, may also be used for touch vibration feedback, or the like. The keys 294 include a power on key, a volume key, etc.
The mobile terminal 200 may support one or more SIM card interfaces 295 for interfacing with a SIM card to enable telephony and data communications, among other functions.
Fig. 3 shows an exemplary flow of an image processing method, which may be performed by the server 110 or the terminal 120 described above, including the following steps S310 to S320:
Step S310, a first image collected by a camera provided with a light damage resistant filter and a second image collected by a camera not provided with the light damage resistant filter are obtained, wherein the first image and the second image are images collected for the same scene.
Photodamage generally refers to the problem that city light affects the color of a captured image; for example, the light emitted by high-pressure sodium lamps, mercury lamps or street lamps with a low color rendering index in a city often makes the captured picture yellowish, the night sky impure, the stars inconspicuous and the picture murky. The anti-photodamage (light damage resistant) filter attenuates the specific wavebands of light emitted by such artificial light sources so as to reduce the influence of photodamage on image quality; configuring a camera with the light damage resistant filter therefore blocks, to a certain extent, the influence of the photodamage wavebands on the image. The first image is an image with the anti-photodamage effect, and the second image is a photodamaged image, or a normal image, without the anti-photodamage effect.
The first image and the second image being images acquired for the same scene means that they share the same shooting area, i.e. the shooting objects in the first image and the second image are the same objects. The difference is that, depending on the type (e.g. telephoto, wide-angle or ultra-wide-angle) or the parameters (e.g. focal length or field of view) of the camera used, the image size, the pixel count or the position of the shooting object may differ between the first image and the second image. For example, the camera configured with the light damage resistant filter may be a lower-cost camera, such as one built from a lens with a narrow field of view or a sensor with a low pixel count, while the camera not configured with the light damage resistant filter may be a telephoto, wide-angle or ultra-wide-angle camera; the first image would then be an image without telephoto, wide-angle or ultra-wide-angle characteristics, and the second image an image with such characteristics. Alternatively, the camera configured with the light damage resistant filter may have a lower pixel count and the camera not configured with it a higher pixel count, so that the first image has fewer pixels than the second image; or the canvas proportions of the images collected by the two cameras may differ, so that the first image may be larger than the second image. These camera attributes are only illustrative and may be set as needed in practice; for example, the camera configured with the light damage resistant filter may instead be a high-pixel camera and the camera not configured with it a low-pixel camera, or the size of the second image may be greater than or equal to that of the first image, which the present disclosure does not specifically limit.
In this exemplary embodiment, the camera configured with the light damage resistant filter and the camera not configured with it may be arranged in the terminal device in various ways according to actual needs, as illustrated by the following examples. In a first arrangement, as shown in fig. 4, the back of the mobile terminal may include three cameras not configured with the light damage resistant filter, namely a telephoto camera 410, a main camera 420 and a wide-angle camera 430, through which normal second images can be collected, and a smaller anti-photodamage camera 440 may additionally be arranged beside these cameras specifically for collecting the first image; to reduce cost, the anti-photodamage camera 440 may be an ordinary, inexpensive low-pixel camera. In a second arrangement, as shown in fig. 5, the back of the mobile terminal may include two cameras not configured with the light damage resistant filter, namely a telephoto camera 510 and a main camera 520, for collecting the second image, and a wide-angle camera 530 configured with the light damage resistant filter for collecting the first image. In the above arrangements, the light damage resistant filter may be configured for a camera, as shown in fig. 6, by integrating the light damage resistant filter 610 on the lens 620 of the camera module or on the image sensor below the lens 620.
In a third arrangement, as shown in fig. 7, the back of the mobile terminal may include three cameras not configured with the light damage resistant filter for collecting the second image; in addition, a movable member 740 is arranged beside the wide-angle camera 730, and a light damage resistant filter 750 is mounted on the movable member 740. According to actual needs, the movable member 740 can move the light damage resistant filter 750 from position B, i.e. the position shown in fig. 7 (b), to position A, i.e. the position shown in fig. 7 (a), and can also retract it from position A back to position B. When the light damage resistant filter 750 is moved to position A, the wide-angle camera 730 can collect the first image with the anti-photodamage effect; when the filter 750 is retracted to position B, the wide-angle camera 730 can collect a normal second image. On this basis, an image with the anti-photodamage effect can be obtained, and a normal image can also be collected through the telephoto camera 710 or the main camera 720 and combined with the image collected by the wide-angle camera 730. It should be noted that the movable member and the light damage resistant filter may also be arranged beside the main camera or the telephoto camera, which the present disclosure does not specifically limit.
In the present exemplary embodiment, the camera configured with the light damage resistant filter and the camera not configured with it may be different cameras; for example, in the first and second arrangements shown in fig. 4 and fig. 5, separate cameras with and without the light damage resistant filter are provided to collect the respective images. Alternatively, in the third arrangement shown in fig. 7, a filter moving device is provided so that the same camera can serve both as the camera configured with the light damage resistant filter and as the camera not configured with it. Different arrangements may be adopted according to actual needs.
Step S320, performing color adjustment processing on the first image according to the second image to obtain a light damage reduction image corresponding to the first image, or performing color adjustment processing on the second image according to the first image to obtain a light damage resistant image corresponding to the second image.
The light damage reduction image is a normal image, i.e. an image without the anti-photodamage effect, obtained by performing color adjustment processing on an image that has the anti-photodamage effect. The light damage resistant image is an image with the anti-photodamage effect obtained by performing color adjustment processing on a normal image; it can alleviate problems such as a yellowish picture, an impure night sky, inconspicuous stars and a murky picture, so that, for example, the sky in the processed light damage resistant image appears bluer.
In practical applications, not all scenes require images to be collected with the camera configured with the light damage resistant filter, nor is an image with the anti-photodamage effect always wanted. For example, in environments with little photodamage, such as daytime with sufficient sunlight and no city lighting, the normal camera can be used to shoot directly, and shooting with the filtered camera would instead impair the color rendering of the image. Conversely, for an existing anti-photodamage image stored in the terminal, the user may want to convert it into a normal image without the anti-photodamage effect. Based on the requirements of such scenes, the present exemplary embodiment proposes that different color adjustment strategies may be applied to the first image and the second image acquired for the same scene, so as to convert an anti-photodamage image into a light damage reduction image, or a normal image into a light damage resistant image. For example, in the arrangement shown in fig. 4, the main camera 420 may have a higher pixel count and collect a high-pixel second image, while the camera 440 configured with the light damage resistant filter may have a lower pixel count and collect a low-pixel first image; performing color adjustment processing on the second image according to the first image then yields an image that has both a high pixel count and the anti-photodamage effect. In the arrangement shown in fig. 5, the normal second image collected by the main camera 520 may be used to perform color adjustment on the first image collected by the wide-angle camera 530 configured with the light damage resistant filter, thereby restoring a normal image that has the wide-angle effect but not the anti-photodamage effect.
Specifically, the present exemplary embodiment may perform color adjustment processing on the first image or the second image according to a color ratio relationship between the photodamage-resistant image and the non-photodamage-resistant image, so as to obtain a photodamage-reduced image or a photodamage-resistant image. The color proportion relation can be calculated through the color channel value of the image.
In addition, in the present exemplary embodiment, the color adjustment processing may be performed according to the actual scene requirements. For example, when the current scene is detected to be a night scene, or when the user actively turns on the anti-photodamage mode, the camera configured with the light damage resistant filter collects the first image of the current scene at the same time as the camera not configured with it collects the second image, and the light damage resistant image corresponding to the second image can then be output through image processing. Conversely, when the current scene is detected to be a bright daytime scene and a normal image shot by the wide-angle camera is wanted, but the wide-angle camera is configured with the light damage resistant filter, a normal image can be obtained at the same time through the ordinary camera without the filter, and the image shot by the camera configured with the light damage resistant filter can be processed using this normal image to obtain the corresponding light damage reduction image. In addition, the terminal device may by default collect and store both the first image and the second image in any scene, so that when the user later wants to convert a certain image, the image color can be adjusted flexibly and effectively.
In an exemplary embodiment, in order to secure the image processing effect, the image processing method may further include the steps of:
and performing alignment processing on the first image and the second image.
Shooting parameters may differ between the two cameras, such as the size of the captured image, or the position or angle of the same shooting object in images captured from different shooting angles; for example, when the two cameras are arranged one above the other on the back of the mobile terminal, the angles of the captured images may differ slightly. In order to prevent such differences from affecting the subsequent image processing, the present exemplary embodiment may perform alignment processing on the first image and the second image after they are acquired.
As shown in fig. 8, the first image 810 is an image collected by a camera equipped with a light damage resistance filter, and the second image 820 is an image collected by a camera not equipped with a light damage resistance filter, and it can be seen that the image sizes of the first image 810 and the second image 820 and the position of the photographic subject 830 in the images are inconsistent. After the alignment process, as shown in fig. 9, the first image 910 and the second image 920 have the same size, and the position of the photographic subject 930 in the images is the same. The present exemplary embodiment can correct a variation caused by an image capturing process through image alignment processing, so that a subsequent image processing process can have a better effect.
Specifically, the present exemplary embodiment may use the SIFT (Scale-Invariant Feature Transform) algorithm to extract feature vectors that are invariant to scaling, rotation and brightness changes from the images, and align the first image and the second image by matching these feature vectors. Other algorithms capable of aligning the first image and the second image should also fall within the scope of protection of the present disclosure, and are not specifically limited here.
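A rough OpenCV-based sketch of such an alignment step (SIFT features, ratio-test matching, and a RANSAC homography) is given below; the function name, the parameter values and the assumption of RGB-ordered arrays are illustrative choices for the sketch, not details stated in the patent.

```python
import cv2
import numpy as np

def align_images(first_img, second_img):
    """Warp second_img onto first_img's geometry so that pixels correspond (RGB arrays assumed)."""
    gray1 = cv2.cvtColor(first_img, cv2.COLOR_RGB2GRAY)
    gray2 = cv2.cvtColor(second_img, cv2.COLOR_RGB2GRAY)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray1, None)
    kp2, des2 = sift.detectAndCompute(gray2, None)

    # Match descriptors and keep the better correspondences (Lowe's ratio test).
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = first_img.shape[:2]
    aligned_second = cv2.warpPerspective(second_img, H, (w, h))
    return first_img, aligned_second
```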
Fig. 10 shows a flowchart of another image processing method, which may specifically include the following steps:
step S1010, acquiring a first image acquired by a camera provided with a light damage resistant filter;
step S1020, acquiring a second image acquired by a camera not provided with a light damage resistance filter, wherein the first image and the second image are images acquired for the same scene;
step S1030, performing alignment processing on the first image and the second image;
step S1040, performing color adjustment processing on the first image according to the second image to obtain a light damage reduction image corresponding to the first image;
step S1050, performing color adjustment processing on the second image according to the first image to obtain a light damage resistant image corresponding to the second image.
In summary, in the present exemplary embodiment, a first image collected by a camera configured with a light damage resistant filter and a second image collected by a camera not configured with the light damage resistant filter are acquired, the first image and the second image being images collected for the same scene; color adjustment processing is then performed on the first image according to the second image to obtain a light damage reduction image corresponding to the first image, or on the second image according to the first image to obtain a light damage resistant image corresponding to the second image. On the one hand, the camera configured with the light damage resistant filter and the camera not configured with it cooperate to collect images with and without the anti-photodamage effect, and the first image or the second image is processed according to the actual image requirement, so that a light damage reduction image or a light damage resistant image can be obtained simply and effectively, flexibly meeting the actual needs of different scenes. On the other hand, when the second image is color-adjusted according to the first image, the resulting light damage resistant image not only has the anti-photodamage effect but also retains the advantages of the second image, and when the first image is color-adjusted according to the second image, the anti-photodamage effect in the first image can be removed simply and effectively. In still another aspect, when acquiring the light damage reduction image or the light damage resistant image, there is no need to install or remove the light damage resistant filter on the camera, the image acquisition process is simple, and a convenient use experience can be provided for the user.
Specifically, in an exemplary embodiment, as shown in fig. 11, the step S320 may include the following steps:
step S1110, obtaining a first proportional relation among the color channels of the first image and a second proportional relation among the color channels of the second image;
step S1120, determining a color adjustment parameter according to the first proportional relationship and the second proportional relationship;
step S1130, performing color adjustment processing on the first image or the second image by using the color adjustment parameters to obtain a light damage reduction image corresponding to the first image or a light damage resistant image corresponding to the second image.
The proportional relation refers to the ratio of channel values between different color channels. Depending on the color mode, the color channels may be of several kinds: for example, the RGB (Red, Green, Blue) color mode includes an R channel, a G channel and a B channel; the CMYK (Cyan, Magenta, Yellow, Key/black) color mode includes a C channel, an M channel, a Y channel and a K channel; the LAB color mode includes an L (lightness) channel and two color channels A and B; and so on. The present exemplary embodiment may determine the color adjustment parameters according to the proportional relation between the color channel values of the images and perform color adjustment on the first image or the second image.
In consideration of that the values of the same color channel may have differences in different pixel points in the image, the average value of the values of each color channel may be calculated first, for example, the first proportional relationship and the second proportional relationship are determined according to the average value of the R channel value, the average value of the G channel value, and the average value of the B channel value, so as to determine the color adjustment parameters and the like.
In an exemplary embodiment, the first proportional relationship may include: a ratio of the second color channel value to the first color channel value and a ratio of the third color channel value to the first color channel value in the first image; the second proportional relationship includes: a ratio of a second color channel value to a first color channel value and a ratio of a third color channel value to a first color channel value in the second image;
for example, in the RGB color mode, the first color channel value may be a value of a G channel, the second color channel value may be a value of an R channel, and the third color channel value may be a value of a B channel. First proportional relation P 1 May include:andwherein, rave 1 Gave, the average of R channel values in the first image 1 As an average value of the G channel values in the first image, bave 1 Is the average of the B-channel values in the first image. Second proportional relation P 2 May include->And +.>Wherein, rave 2 Gave, the average of R channel values in the second image 2 As an average of the G channel values in the second image, bave 2 Is the average of the B-channel values in the first image.
In an exemplary embodiment, as shown in fig. 12, the step S1120 may include the following steps:
step S1210, determining a color adjustment parameter of the second color channel according to the ratio of the second color channel value to the first color channel value in the first image and the ratio of the second color channel value to the first color channel value in the second image;
step S1220, determining a color adjustment parameter of the third color channel according to the ratio of the third color channel value to the first color channel value in the first image and the ratio of the third color channel value to the first color channel value in the second image;
correspondingly, step S1130 may include:
step S1230, adjusting the second color channel of the image to be adjusted by using the color adjustment parameters of the second color channel, and adjusting the third color channel of the image to be adjusted by using the color adjustment parameters of the third color channel; the image to be adjusted is a first image or a second image.
In this exemplary embodiment, the color adjustment parameter of the second color channel may be determined from the ratio of the second color channel value to the first color channel value in the first image and the ratio of the second color channel value to the first color channel value in the second image. For example, the color adjustment parameter Kgr of the second color channel may be expressed as Kgr = (Rave2/Gave2) / (Rave1/Gave1).

The color adjustment parameter of the third color channel may be determined from the ratio of the third color channel value to the first color channel value in the first image and the ratio of the third color channel value to the first color channel value in the second image. For example, the color adjustment parameter Kgb of the third color channel may be expressed as Kgb = (Bave2/Gave2) / (Bave1/Gave1).

In the present exemplary embodiment, for different color adjustment processing requirements, the above color adjustment parameters may be applied to different images to determine a light damage reduction image or a light damage resistant image.

Specifically, when the first image is color-adjusted, the light damage reduction image corresponding to the first image may be determined by calculating, for each pixel of the first image, G1' = G1, R1' = R1 * Kgr, B1' = B1 * Kgb, where G1, R1, B1 denote the values of the green, red and blue color channels of the pixel in the first image, and G1', R1', B1' denote the corresponding values in the obtained light damage reduction image.

When the second image is color-adjusted, the light damage resistant image corresponding to the second image may be determined by calculating, for each pixel of the second image, G2' = G2, R2' = R2 / Kgr, B2' = B2 / Kgb, where G2, R2, B2 denote the values of the green, red and blue color channels of the pixel in the second image, and G2', R2', B2' denote the corresponding values in the obtained light damage resistant image.
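A short sketch of how the two parameters could be computed from the proportional relations, under the formulas above; P1 and P2 are assumed to be (R/G, B/G) tuples such as those returned by the channel_ratios sketch earlier, and the function name is illustrative.

```python
def color_adjust_params(p1, p2):
    """Kgr and Kgb relate the second image's channel proportions to the first image's."""
    kgr = p2[0] / p1[0]   # (Rave2 / Gave2) / (Rave1 / Gave1)
    kgb = p2[1] / p1[1]   # (Bave2 / Gave2) / (Bave1 / Gave1)
    return kgr, kgb
```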
In an exemplary embodiment, as shown in fig. 13, the step S1110 may include the steps of:
step S1310, determining a first region of interest in the first image and a second region of interest in the second image;
step S1320, counting pixel values of a first region of interest to obtain a first color channel value, a second color channel value and a third color channel value of a first image;
step S1330, the pixel values of the second region of interest are counted to obtain a first color channel value, a second color channel value, and a third color channel value of the second image.
The region of interest may be any location in the image, or may be a specific region, such as a sky region, an ocean/lake region, or a mountain region. In order to save unnecessary calculation amount and calculate the proportional relation of different color channel values accurately and effectively, the present exemplary embodiment may determine a first region of interest from a first image, determine a second region of interest from a second image, and perform statistics of pixel values in the first region of interest and the second region of interest, respectively, to determine a first color channel value, a second color channel value, and a third color channel value.
It should be noted that the first region of interest and the second region of interest may be the same regions corresponding to the first image and the second image, for example, the sky region in the first image is the first region of interest, and the sky region corresponding to the same position in the second image is the second region of interest.
In addition, in the present exemplary embodiment, when performing the color adjustment processing on the first image or the second image according to the color adjustment parameter, global adjustment may be selected, that is, calculation may be performed on each pixel point in the first image and the second image, local adjustment may also be selected, for example, adjustment may be performed only on the pixel point in the first region of interest or the second region of interest, and the present disclosure is not limited in particular.
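As a rough illustration of the global-versus-local choice, the scaling of the red and blue channels can be restricted to a boolean region-of-interest mask, as in the sketch below; the function name, the RGB channel order and the 8-bit value range are assumptions made for the sketch, not details given in this embodiment.

```python
import numpy as np

def adjust_channels(img_rgb, kgr, kgb, roi_mask=None):
    """Scale the R and B channels by kgr / kgb, either globally or only inside roi_mask."""
    out = img_rgb.astype(np.float64)
    if roi_mask is None:
        roi_mask = np.ones(img_rgb.shape[:2], dtype=bool)  # global adjustment
    out[roi_mask, 0] *= kgr   # red channel
    out[roi_mask, 2] *= kgb   # blue channel; green is left unchanged
    return np.clip(out, 0, 255).astype(np.uint8)
```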
In an exemplary embodiment, the step S1310 may include the steps of:
and determining a first region of interest in a region of the first image with saturation below a preset threshold value, and determining a second region of interest in a region of the second image with saturation below the preset threshold value.
If the image contains a region with high saturation, the pixel colors there may not be represented accurately; for example, in an 8-bit image, the R-channel, G-channel or B-channel value of a highly saturated region may be clipped at 255. Therefore, in the present exemplary embodiment, regions with high saturation may be filtered out according to a preset threshold, and the regions meeting the saturation requirement may be used directly as the first region of interest and the second region of interest. Alternatively, the regions with saturation lower than the preset threshold may first be determined based on the preset threshold, and then, according to actual requirements, the first region of interest may be further determined from the first image and the second region of interest from the second image within the regions meeting the saturation requirement. The preset threshold may be set as needed, and the present disclosure is not limited in this regard.
As shown in fig. 14, region 1411 represents the high saturation region whose saturation is higher than the preset threshold in the first image 1410, and region 1421 represents the high saturation region whose saturation is higher than the preset threshold in the second image 1420; accordingly, the regions outside the high saturation regions 1411 and 1421 may serve as regions of interest. Considering that the first region of interest and the second region of interest are normally located at the same position in the two images, after the high saturation regions whose saturation is higher than the preset threshold are determined in the first image and the second image, the regions of interest may be determined outside the union of the two high saturation regions. For example, from the high saturation region 1411 of the first image and the high saturation region 1421 of the second image shown in fig. 14, the union of the high saturation regions may be represented as region 1510 shown in fig. 15, and the first region of interest and the second region of interest may be determined outside region 1510. For example, as shown in fig. 16, region 1610 may be taken as the first region of interest in the first image 1410, and region 1620 corresponding to region 1610 may be taken as the second region of interest in the second image 1420; as shown in fig. 17, region 1710 in the first image 1410 may be taken as the first region of interest, and region 1720 in the second image 1420 corresponding to region 1710 as the second region of interest. The region of interest may be determined in a number of ways, which the present disclosure does not specifically limit.
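A possible sketch of the saturation screening described above, treating "saturation" as channel values approaching the maximum of an 8-bit image; the threshold value and function names are illustrative assumptions.

```python
import numpy as np

def low_saturation_mask(img_rgb, threshold=250):
    """True where every color channel stays below the threshold (not clipped)."""
    return np.all(img_rgb < threshold, axis=2)

def joint_roi_mask(first_img, second_img, threshold=250):
    """Exclude the union of the two high-saturation regions; what remains can
    serve as (or contain) the first and second regions of interest."""
    return low_saturation_mask(first_img, threshold) & low_saturation_mask(second_img, threshold)
```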
In this exemplary embodiment, two different sets of color adjustment parameters may be calculated, with the corresponding set applied to the first image and to the second image respectively, so that the light damage reduction image and the light damage resistant image are determined through different calculations. Alternatively, a single set of color adjustment parameters may be calculated and applied to the first image and the second image through forward and reverse calculation respectively, to determine the light damage reduction image and the light damage resistant image.
Specifically, in an exemplary embodiment, the step S1130 may include:
and performing color adjustment processing in a first direction on the first image or performing color adjustment processing in a second direction on the second image by using the color adjustment parameters, wherein the second direction is the reverse direction of the first direction.
For example, the first-direction color adjustment of the first image may be G1' = G1, R1' = R1 * Kgr, B1' = B1 * Kgb, where G1, R1, B1 denote the values of the green, red and blue color channels of each pixel in the first image, and G1', R1', B1' denote the corresponding values in the resulting light damage reduction image. The second-direction, i.e. reverse, color adjustment of the second image may be G2' = G2, R2' = R2 / Kgr, B2' = B2 / Kgb, where G2, R2, B2 denote the values of the green, red and blue color channels of each pixel in the second image, and G2', R2', B2' denote the corresponding values in the resulting light damage resistant image. That is, the present exemplary embodiment can carry out the two processing procedures with the same color adjustment parameters Kgr and Kgb, one being the inverse operation of the other, to obtain the corresponding images.
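A minimal sketch of this forward/reverse use of the same Kgr and Kgb, assuming 8-bit RGB arrays; the function name and the clipping to [0, 255] are assumptions of the sketch.

```python
import numpy as np

def apply_direction(img_rgb, kgr, kgb, reverse=False):
    """Forward: first image -> light damage reduction image.
    Reverse (same parameters): second image -> light damage resistant image."""
    if reverse:
        kgr, kgb = 1.0 / kgr, 1.0 / kgb
    out = img_rgb.astype(np.float64)
    out[..., 0] *= kgr          # R' = R * Kgr (or R / Kgr in reverse)
    out[..., 2] *= kgb          # B' = B * Kgb (or B / Kgb in reverse); G unchanged
    return np.clip(out, 0, 255).astype(np.uint8)

# reduction = apply_direction(first_image, kgr, kgb)
# resistant = apply_direction(second_image, kgr, kgb, reverse=True)
```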
Fig. 18 shows a flowchart of yet another image processing method, which may specifically include the steps of:
step S1802, acquiring a first image acquired by a camera equipped with a light damage resistant filter;
step S1804, acquiring a second image acquired by the camera not equipped with the light damage resistant filter;
step S1806, performing alignment processing on the first image and the second image to obtain a first image and a second image after the alignment processing;
step S1808, determining a first region of interest in a region of the first image having a saturation lower than a preset threshold;
step S1810, determining a second region of interest in a region of the second image having a saturation lower than the preset threshold;
step S1812, counting pixel values of the first region of interest and the second region of interest to determine a color adjustment parameter;
Step S1814, performing color adjustment processing in the first direction on the first image using the color adjustment parameters;
step S1816, performing color adjustment processing in the second direction on the second image using the color adjustment parameters;
step S1818, obtaining a light damage reduction image corresponding to the first image;
step S1820, obtaining the anti-photodamage image corresponding to the second image.
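Putting the pieces together, the following is a sketch of the flow of steps S1802 to S1820, reusing the illustrative helper functions from the earlier sketches; all names are assumptions, not part of the patent.

```python
def process_pair(first_image, second_image):
    # S1806: align the two captures of the same scene
    first_aligned, second_aligned = align_images(first_image, second_image)

    # S1808-S1812: region of interest below the saturation threshold, then the parameters
    roi = joint_roi_mask(first_aligned, second_aligned, threshold=250)
    p1 = channel_ratios(first_aligned, roi)
    p2 = channel_ratios(second_aligned, roi)
    kgr, kgb = color_adjust_params(p1, p2)

    # S1814/S1818: forward adjustment of the first image -> light damage reduction image
    reduction_image = apply_direction(first_aligned, kgr, kgb)
    # S1816/S1820: reverse adjustment of the second image -> light damage resistant image
    resistant_image = apply_direction(second_aligned, kgr, kgb, reverse=True)
    return reduction_image, resistant_image
```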
Exemplary embodiments of the present disclosure also provide an image processing apparatus. As shown in fig. 19, the image processing apparatus 1900 may include: an image acquisition module 1910, configured to acquire a first image acquired by a camera configured with a light damage resistant filter and a second image acquired by a camera not configured with a light damage resistant filter, where the first image and the second image are images acquired for the same scene; the image processing module 1920 is configured to perform color adjustment processing on the first image according to the second image to obtain a light damage reduction image corresponding to the first image, or perform color adjustment processing on the second image according to the first image to obtain a light damage resistant image corresponding to the second image.
In an exemplary embodiment, the image processing module includes: a proportional relation obtaining unit, configured to obtain a first proportional relation between the color channels of the first image and a second proportional relation between the color channels of the second image; an adjustment parameter determining unit, configured to determine a color adjustment parameter according to the first proportional relation and the second proportional relation; and a color adjustment unit, configured to perform color adjustment processing on the first image or the second image by using the color adjustment parameter, to obtain a light damage reduction image corresponding to the first image or a light damage resistant image corresponding to the second image.
In an exemplary embodiment, the first proportional relationship comprises: a ratio of the second color channel value to the first color channel value and a ratio of the third color channel value to the first color channel value in the first image; the second proportional relationship includes: a ratio of a second color channel value to a first color channel value and a ratio of a third color channel value to a first color channel value in the second image; the adjustment parameter determination unit includes: a first parameter determining subunit, configured to determine a color adjustment parameter of the second color channel according to a ratio of the second color channel value to the first color channel value in the first image and a ratio of the second color channel value to the first color channel value in the second image; a second parameter determining subunit, configured to determine a color adjustment parameter of the third color channel according to a ratio of the third color channel value to the first color channel value in the first image and a ratio of the third color channel value to the first color channel value in the second image; the color adjustment unit includes: a color channel adjusting subunit, configured to adjust a second color channel of the image to be adjusted by using a color adjustment parameter of the second color channel, and adjust a third color channel of the image to be adjusted by using a color adjustment parameter of the third color channel; the image to be adjusted is a first image or a second image.
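As a hypothetical numeric illustration (the exact formula is not fixed by the text above): if the statistics of the first region of interest give R1/G1 = 0.8 and B1/G1 = 1.1, while the second region of interest gives R2/G2 = 1.0 and B2/G2 = 0.9, then under an assumed ratio-of-ratios choice the parameters would be Kgr = 1.0 / 0.8 = 1.25 and Kgb = 0.9 / 1.1 ≈ 0.82. Multiplying the red and blue channels of the first image by these values (or dividing those of the second image, in the inverse direction) moves each image's channel balance toward that of the other image.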
In an exemplary embodiment, the proportional relation acquiring unit includes: a region of interest determination subunit configured to determine a first region of interest in the first image and determine a second region of interest in the second image; the first pixel value statistics subunit is used for counting the pixel value of the first region of interest to obtain a first color channel value, a second color channel value and a third color channel value of the first image; and the second pixel value statistics subunit is used for counting the pixel values of the second region of interest to obtain a first color channel value, a second color channel value and a third color channel value of the second image.
In an exemplary embodiment, the region of interest determination subunit comprises: and the region determining subunit is used for determining a first region of interest in a region with the saturation lower than a preset threshold value in the first image and determining a second region of interest in a region with the saturation lower than the preset threshold value in the second image.
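If no color-space library is available, the saturation used for this threshold test can be approximated directly from the RGB channels. The definition below, S = (max − min) / max scaled to 0–255, is one common convention and is offered only as an illustrative assumption.

```python
import numpy as np

def saturation_mask(image_rgb, sat_threshold=40):
    """Low-saturation mask computed directly from RGB, without an HSV conversion library."""
    rgb = image_rgb.astype(np.float64)
    c_max = rgb.max(axis=-1)
    c_min = rgb.min(axis=-1)
    # Avoid division by zero for pure black pixels, which are treated as unsaturated.
    sat = np.where(c_max > 0, (c_max - c_min) / c_max * 255.0, 0.0)
    return sat < sat_threshold
```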
In an exemplary embodiment, the color adjustment unit includes: and the color adjustment subunit is used for performing color adjustment processing in a first direction on the first image or performing color adjustment processing in a second direction on the second image by utilizing the color adjustment parameters, wherein the second direction is the reverse direction of the first direction.
In an exemplary embodiment, the image processing apparatus further includes: and the alignment module is used for performing alignment processing on the first image and the second image after acquiring the first image and the second image.
The specific details of each part in the above apparatus are already described in the method part embodiments, and thus will not be repeated.
Exemplary embodiments of the present disclosure also provide a computer readable storage medium, which may be implemented in the form of a program product comprising program code. When the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "exemplary method" section above; for example, any one or more of the steps of fig. 3, 10, 11, 12, 13 or 18 may be performed. The program product may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. An image processing method, comprising:
acquiring a first image acquired by a camera provided with a light damage resistant filter and a second image acquired by a camera not provided with the light damage resistant filter, wherein the first image and the second image are images acquired for the same scene;
when the current scene is detected to be a scene for shooting a normal image and the first image is shot by the camera provided with the light damage resistant filter, performing color adjustment processing on the first image according to the second image to obtain a light damage reduction image corresponding to the first image; when the current scene is detected to be a scene for shooting a light damage resistant image and the second image is shot by the camera not provided with the light damage resistant filter, performing color adjustment processing on the second image according to the first image to obtain the light damage resistant image corresponding to the second image.
2. The method of claim 1, wherein performing color adjustment processing on the first image according to the second image to obtain the light damage reduction image corresponding to the first image, or performing color adjustment processing on the second image according to the first image to obtain the light damage resistant image corresponding to the second image, comprises:
acquiring a first proportional relation among all color channels of the first image and a second proportional relation among all color channels of the second image;
determining a color adjustment parameter according to the first proportional relationship and the second proportional relationship;
and performing color adjustment processing on the first image or the second image by using the color adjustment parameters to obtain a photodamage reduction image corresponding to the first image or a photodamage resistant image corresponding to the second image.
3. The method of claim 2, wherein the first proportional relationship comprises: a ratio of a second color channel value to a first color channel value and a ratio of a third color channel value to a first color channel value in the first image; the second proportional relationship includes: a ratio of a second color channel value to a first color channel value and a ratio of a third color channel value to a first color channel value in the second image;
the determining the color adjustment parameter according to the first proportional relationship and the second proportional relationship includes:
determining a color adjustment parameter of a second color channel according to the ratio of the second color channel value to the first color channel value in the first image and the ratio of the second color channel value to the first color channel value in the second image;
determining a color adjustment parameter of a third color channel according to the ratio of the third color channel value to the first color channel value in the first image and the ratio of the third color channel value to the first color channel value in the second image;
the step of performing color adjustment processing on the first image or the second image by using the color adjustment parameters to obtain a light damage reduction image corresponding to the first image or a light damage resistance image corresponding to the second image, includes:
adjusting a second color channel of an image to be adjusted by using the color adjustment parameters of the second color channel, and adjusting a third color channel of the image to be adjusted by using the color adjustment parameters of the third color channel; the image to be adjusted is the first image or the second image.
4. The method of claim 2, wherein the acquiring a first proportional relationship between the color channels of the first image and a second proportional relationship between the color channels of the second image comprises:
determining a first region of interest in the first image and a second region of interest in the second image;
counting the pixel values of the first region of interest to obtain a first color channel value, a second color channel value and a third color channel value of the first image;
and counting the pixel values of the second region of interest to obtain a first color channel value, a second color channel value and a third color channel value of the second image.
5. The method of claim 4, wherein the determining a first region of interest in the first image and a second region of interest in the second image comprises:
and determining a first region of interest in a region of the first image, the saturation of which is lower than a preset threshold value, and determining a second region of interest in a region of the second image, the saturation of which is lower than the preset threshold value.
6. The method according to claim 2, wherein performing color adjustment processing on the first image or the second image by using the color adjustment parameter to obtain a light damage reduction image corresponding to the first image or a light damage resistance image corresponding to the second image includes:
and performing color adjustment processing in a first direction on the first image or performing color adjustment processing in a second direction on the second image by using the color adjustment parameters, wherein the second direction is the reverse direction of the first direction.
7. The method of claim 1, wherein after acquiring the first image and the second image, the method further comprises:
and performing alignment processing on the first image and the second image.
8. An image processing apparatus, comprising:
the image acquisition module is used for acquiring a first image acquired by a camera provided with a light damage resistant filter and a second image acquired by a camera not provided with the light damage resistant filter, wherein the first image and the second image are images acquired for the same scene;
the image processing module is used for, when the current scene is detected to be a scene for shooting a normal image and the first image is shot by the camera provided with the light damage resistant filter, performing color adjustment processing on the first image according to the second image to obtain a light damage reduction image corresponding to the first image; and, when the current scene is detected to be a scene for shooting a light damage resistant image and the second image is shot by the camera not provided with the light damage resistant filter, performing color adjustment processing on the second image according to the first image to obtain the light damage resistant image corresponding to the second image.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any of claims 1 to 7.
10. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 7 via execution of the executable instructions.