CN110971841B - Image processing method, image processing device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN110971841B
Authority
CN
China
Prior art keywords
sub-region, image, high dynamic
Legal status
Active
Application number
CN201911252972.9A
Other languages
Chinese (zh)
Other versions
CN110971841A (en)
Inventor
贾玉虎
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911252972.9A
Publication of CN110971841A
Application granted
Publication of CN110971841B
Status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation

Abstract

The embodiments of the present application disclose an image processing method, an image processing device, a storage medium, and an electronic device. A plurality of images with different exposure parameters are acquired; each image is adaptively segmented into a plurality of sub-regions; sub-regions with the same image content are combined into sub-region sets, yielding a plurality of sub-region sets; each sub-region set is aligned and then synthesized with high dynamic range to obtain a corresponding high-dynamic composite sub-region; finally, a high-dynamic composite image of the plurality of images is generated from the high-dynamic composite sub-regions corresponding to the sub-region sets. In other words, in the embodiments of the present application the electronic device aligns the images by local region, eliminating "ghosting" and thereby improving the quality of high dynamic range synthesis.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
Electronic devices such as smart phones and tablet computers have become an integral part of daily life, and the rich functions they provide allow people to work and be entertained anytime, anywhere. For example, a user may use an electronic device to capture an image or record a video, so the electronic device is often required to perform various image processing operations. When performing high dynamic range synthesis, the electronic device may acquire a plurality of images of the same shooting scene, align them, and then synthesize them. In the related art, however, the image alignment effect is still poor, which degrades the quality of high dynamic range synthesis.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a storage medium and an electronic device, which can improve the quality of high dynamic range synthesis.
The embodiment of the application provides an image processing method, which is applied to electronic equipment and comprises the following steps:
acquiring a plurality of images with different exposure parameters;
adaptively segmenting each image into a plurality of sub-regions;
combining the subregions with the same image content into a subregion set to obtain a plurality of subregion sets;
aligning each sub-region set, and performing high dynamic range synthesis on each sub-region set to obtain a corresponding high-dynamic composite sub-region;
and generating a high-dynamic synthetic image of the plurality of images according to the high-dynamic synthetic subareas corresponding to the plurality of subarea sets.
The image processing apparatus provided in the embodiments of the present application is applied to an electronic device and includes:
an image acquisition module, configured to acquire a plurality of images with different exposure parameters and determine a reference image from the plurality of images;
an image segmentation module, configured to adaptively segment the reference image and the other images of the plurality of images, segmenting the reference image into a plurality of first sub-regions and the other images into a plurality of second sub-regions;
a region combination module, configured to combine each second sub-region with the first sub-region having the same image content into a sub-region set, to obtain a plurality of sub-region sets;
an alignment and synthesis module, configured to align each sub-region set and perform high dynamic range synthesis on each sub-region set to obtain a corresponding high-dynamic composite sub-region;
and an image generation module, configured to generate a high-dynamic composite image of the reference image and the other images according to the high-dynamic composite sub-regions corresponding to the plurality of sub-region sets.
A storage medium provided in the embodiments of the present application stores a computer program which, when loaded by a processor, causes the processor to execute the image processing method provided in any embodiment of the present application.
The electronic device provided in the embodiment of the present application includes a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the image processing method provided in any embodiment of the present application by loading the computer program.
According to the method and the device, the local regions of the images are aligned, so that ghosting is eliminated and the quality of high dynamic range synthesis is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application.
Fig. 2 is an exemplary diagram of triggering a shooting request in the embodiment of the present application.
Fig. 3 is an exemplary diagram of a sub-region set obtained by combining in the embodiment of the present application.
Fig. 4 is a schematic diagram of a high dynamic composite image synthesized in the embodiment of the present application.
Fig. 5 is another schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be construed as limiting other embodiments not described in detail herein.
The embodiment of the application relates to an image processing method, an image processing device, a storage medium and an electronic device, wherein an execution subject of the image processing method can be the image processing device provided by the embodiment of the application or the electronic device integrated with the image processing device, and the image processing device can be realized in a hardware or software mode. The electronic device may be a device with processing capability configured with a processor, such as a smart phone, a tablet computer, a palm computer, a notebook computer, or a desktop computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application, and a specific flow of the image processing method according to the embodiment of the present application may be as follows:
in 101, a plurality of images with different exposure parameters are acquired.
In the embodiment of the application, the electronic device can shoot a shooting scene according to different exposure parameters when receiving a triggered shooting request, so that a plurality of images with different exposure parameters are obtained. The shooting request can be triggered in various ways, such as by a virtual key, by a physical key, by a voice command, and the like.
For example, referring to fig. 2, after the user operates the electronic device to start a photographing application (for example, the system "camera" application of the electronic device) and moves the device so that its camera is aimed at the shooting scene, the user may trigger a shooting request by tapping the "photograph" key (a virtual key) provided on the application interface.
For another example, after starting the photographing application and aiming the camera at the shooting scene, the user can speak the voice command "photograph" to trigger the shooting request, or directly press a physical photographing key on the electronic device.
After receiving the triggered shooting request, the electronic device immediately responds by shooting the scene according to different exposure parameters, acquiring a plurality of images of the scene with different exposure parameters. Because the exposure parameters differ, the images differ only in brightness information; their image content, that is, the content of the shooting scene, is the same. The exposure parameters include, but are not limited to, sensitivity, shutter speed, and aperture size.
As an optional implementation, when shooting, the electronic device may sequentially obtain N locally pre-stored sets of different exposure parameters; each time one set is obtained, it shoots the scene with that exposure parameter combined with the other shooting parameters, and so on, obtaining a plurality of images corresponding to the N different sets of exposure parameters. The pre-stored exposure parameters may be obtained in the order in which they produce image brightness from low to high. Apart from the different exposure parameters, all other shooting parameters of the captured images are identical.
For example, suppose two sets of exposure parameters are pre-stored locally, a first and a second exposure parameter, where an image captured with the first exposure parameter is darker than one captured with the second. When responding to a received shooting request, the electronic device first obtains the first exposure parameter and shoots the scene with it, combined with the other shooting parameters, and then obtains the second exposure parameter and shoots the scene again in the same way.
As another optional implementation, the electronic device may shoot the scene using exposure bracketing. Specifically, it first meters the shooting scene to obtain a photometric value, determines the exposure parameter corresponding to that value according to a preset mapping between photometric values and exposure parameters, and shoots the scene with the determined exposure parameter. Then, starting from the determined exposure parameter, it increases and decreases that parameter by a preset step value and shoots the scene with the increased and decreased exposure parameters respectively, thereby obtaining a plurality of images with different exposure parameters. The number of increase and decrease steps is not limited: for example, decreasing once and increasing once yields three images with different exposure parameters, while decreasing twice and increasing twice yields five.
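The bracketing scheme just described can be sketched as follows; the step value and the symmetric increase/decrease policy are illustrative assumptions, since the patent leaves the concrete step and count open:

```python
def bracketed_evs(base_ev, step=1.0, stops=1):
    """Return exposure values centered on the metered base EV.

    The base EV is decreased and increased `stops` times each,
    e.g. stops=1 -> three exposures, stops=2 -> five exposures.
    """
    return [base_ev + step * k for k in range(-stops, stops + 1)]
```

With `stops=1`, `bracketed_evs(0.0)` produces one darker, one metered, and one brighter exposure, matching the three-image example above.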
At 102, each image is adaptively segmented into a plurality of sub-regions.
After acquiring the plurality of images with different exposure parameters, the electronic device adaptively segments each acquired image according to a preset adaptive segmentation algorithm, so that each image is divided into a plurality of sub-regions. It should be noted that the number of sub-regions depends on the actual image content and is not a fixed value.
The embodiments of the present application do not limit which adaptive segmentation algorithm is used; a person skilled in the art may select one according to actual needs.
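Since the patent leaves the adaptive segmentation algorithm open, the following is one minimal illustrative choice, not the patent's method: a quadtree split that keeps subdividing a region while its luminance variance is high, so the number of sub-regions adapts to the image content as described above.

```python
def quadtree_segment(img, max_var=200.0, min_size=2):
    """Recursively split a grayscale image (list of lists of luminance
    values) into sub-regions. A region is split into four quadrants
    while its luminance variance exceeds `max_var`; returns a list of
    (top, left, height, width) tuples."""
    def variance(t, l, h, w):
        px = [img[r][c] for r in range(t, t + h) for c in range(l, l + w)]
        m = sum(px) / len(px)
        return sum((p - m) ** 2 for p in px) / len(px)

    regions = []

    def split(t, l, h, w):
        if h <= min_size or w <= min_size or variance(t, l, h, w) <= max_var:
            regions.append((t, l, h, w))  # homogeneous or minimal: keep
            return
        h2, w2 = h // 2, w // 2
        split(t, l, h2, w2)
        split(t, l + w2, h2, w - w2)
        split(t + h2, l, h - h2, w2)
        split(t + h2, l + w2, h - h2, w - w2)

    split(0, 0, len(img), len(img[0]))
    return regions
```

A flat image stays one region, while an image with a bright patch is subdivided, which is the "adaptive" behavior the text refers to; the variance threshold is an assumed tuning parameter.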
In 103, the sub-regions with the same image content are combined into a sub-region set to obtain a plurality of sub-region sets.
As is clear from the above, since the overall image content of the acquired images is the same, after segmentation the sub-regions at corresponding positions in different images also have the same image content.
In this embodiment, the electronic device further combines the sub-regions with the same image content into a sub-region set, thereby obtaining a plurality of sub-region sets.
For example, referring to fig. 3, suppose the electronic device acquires three images, image A, image B, and image C, and segments each into three regions: image A into sub-regions 1, 2, and 3; image B into sub-regions 4, 5, and 6; and image C into sub-regions 7, 8, and 9. Sub-regions 1, 4, and 7, which share the same image content, are combined into sub-region set 1; sub-regions 2, 5, and 8 into sub-region set 2; and sub-regions 3, 6, and 9 into sub-region set 3.
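The grouping in the Fig. 3 example amounts to collecting the i-th sub-region of every image into the i-th set. A minimal sketch, assuming the segmentation returns sub-regions in the same positional order for every image:

```python
def group_sub_regions(images_regions):
    """Group sub-regions at the same position across images into sets.

    `images_regions` is a list with one entry per image, each entry a
    list of that image's sub-regions in positional order; sub-region i
    of every image is assumed to cover the same image content.
    """
    return [list(same_position) for same_position in zip(*images_regions)]
```

Applied to the example above, the three images' sub-region lists `[1,2,3]`, `[4,5,6]`, `[7,8,9]` yield the sets `{1,4,7}`, `{2,5,8}`, `{3,6,9}`.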
In 104, each sub-region set is aligned, and high dynamic range synthesis is performed on each sub-region set to obtain a corresponding high dynamic synthesis sub-region.
It should be noted that although the image content of the images is the same as a whole, details in them may be misaligned due to movement of the electronic device during shooting or movement of objects in the scene.
Thus, the electronic device aligns each set of sub-regions separately. For example, for each sub-region set, the electronic device determines a reference sub-region therefrom, and then aligns non-reference sub-regions in the sub-region set with the reference sub-regions, thereby achieving alignment of the sub-region set.
After aligning each sub-region set, for each sub-region set, the electronic device performs high dynamic range synthesis according to a preset high dynamic range synthesis algorithm to obtain a high dynamic synthesis sub-region corresponding to each sub-region set.
Which high dynamic range synthesis algorithm is used is not particularly limited in the present application and may be selected by a person skilled in the art according to actual needs.
At 105, a high-dynamic composite image of the plurality of images is generated from the high-dynamic composite sub-regions corresponding to the plurality of sub-region sets.
In the embodiments of the application, after obtaining the high-dynamic composite sub-region of each sub-region set, the electronic device generates the high-dynamic composite image of the acquired images from those sub-regions. For example, referring to fig. 3 and fig. 4, the electronic device synthesizes high-dynamic composite sub-region 1 from sub-region set 1, high-dynamic composite sub-region 2 from sub-region set 2, and high-dynamic composite sub-region 3 from sub-region set 3. Then, according to the position each original sub-region occupied in its original image, it stitches composite sub-regions 1, 2, and 3 into a new image, which serves as the high-dynamic composite image of the acquired images.
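The stitching step can be sketched as pasting each composite sub-region back at the position recorded for its source sub-region; the `(top, left)` keys are an assumed bookkeeping convention, not something the patent specifies:

```python
def stitch(sub_regions, height, width):
    """Paste each high-dynamic composite sub-region back at the
    position its source sub-region occupied in the original image.

    `sub_regions` maps (top, left) -> 2-D list of pixel values.
    """
    out = [[0] * width for _ in range(height)]
    for (top, left), block in sub_regions.items():
        for r, row in enumerate(block):
            for c, val in enumerate(row):
                out[top + r][left + c] = val
    return out
```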
As can be seen from the above, in the present application a plurality of images with different exposure parameters are acquired; each image is adaptively segmented into a plurality of sub-regions; sub-regions with the same image content are combined into sub-region sets; each sub-region set is aligned and synthesized with high dynamic range to obtain a corresponding high-dynamic composite sub-region; and finally a high-dynamic composite image of the images is generated from the composite sub-regions corresponding to the sub-region sets. In other words, in the embodiments of the present application the electronic device aligns the images by local region, eliminating "ghosting", so that the quality of high dynamic range synthesis is improved.
In one embodiment, aligning each set of sub-regions comprises:
(1) determining a reference sub-region from the sub-region set;
(2) performing feature point identification on the reference sub-region and the non-reference sub-regions in the sub-region set, to obtain matched feature point pairs between each non-reference sub-region and the reference sub-region;
(3) solving a homography matrix from the matched feature point pairs corresponding to each non-reference sub-region;
(4) performing an affine transformation according to the homography matrix corresponding to each non-reference sub-region, so as to align each non-reference sub-region with the reference sub-region.
The following description takes the process of aligning one sub-region set as an example.
The electronic device first determines a reference sub-region from the sub-region set, and then aligns the other sub-regions in the set with it. How the reference sub-region is selected is not specifically limited in this application: for example, a sub-region may be chosen at random from the set, or the sub-region with the highest definition may be used.
After the reference sub-region is determined, the electronic device performs feature point identification on the reference and non-reference sub-regions according to a preset feature point identification algorithm, and matches the identified feature points to obtain matched feature point pairs between each non-reference sub-region and the reference sub-region. Which feature point identification algorithm is used is not particularly limited and may be selected by a person skilled in the art according to actual needs, including but not limited to the SIFT algorithm and the Harris corner algorithm.
After the matched feature point pairs between each non-reference sub-region and the reference sub-region are obtained, the electronic device solves a homography matrix from the matched pairs corresponding to each non-reference sub-region, and then performs an affine transformation according to each homography matrix to align each non-reference sub-region with the reference sub-region, thereby aligning the sub-region set.
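The patent does not name a homography solver. The sketch below uses the classical direct linear transform with the last matrix entry fixed to 1, solved from exactly four matched feature point pairs; a production implementation would typically use many RANSAC-filtered matches (e.g. OpenCV's `findHomography`) instead.

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H (h33 fixed to 1) mapping src -> dst
    from exactly four matched feature-point pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Warping every pixel of a non-reference sub-region through its estimated homography is what brings it into alignment with the reference sub-region.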
In one embodiment, high dynamic range synthesis is performed on each set of sub-regions, including:
(1) obtaining weights for high dynamic range synthesis according to the brightness information at the same pixel position in the sub-region set;
(2) and performing high dynamic range synthesis on the sub-region sets according to the weights.
In the embodiments of the present application, because the sub-regions in the same set were captured with different exposure parameters, the brightness information (for example, brightness values) at the same pixel position in different sub-regions reflects how the shooting scene appears under different exposures. The electronic device can therefore derive the weights for high dynamic range synthesis of the sub-region set from the brightness information at each pixel position.
After determining the weight for high dynamic range synthesis, high dynamic range synthesis can be performed on the sub-region set according to the weight, so as to obtain a corresponding high dynamic synthesis sub-region.
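A common choice for such brightness-derived weights, assumed here since the patent does not prescribe a particular weighting function, is a hat function that trusts mid-gray pixels most and near-black or near-white pixels least:

```python
def hdr_weight(lum, mid=127.5):
    """Hat-shaped weight for an 8-bit luminance value: 1.0 at mid-gray,
    falling to 0.0 at pure black or pure white (one common choice; the
    patent does not fix a weighting function)."""
    return 1.0 - abs(lum - mid) / mid

def fuse_pixels(lums):
    """Weighted average of the same pixel position across the aligned
    sub-regions of one set, producing the composite pixel value."""
    weights = [max(hdr_weight(l), 1e-6) for l in lums]  # avoid all-zero
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, lums)) / total
```

For a pixel that is well exposed in one image and clipped in another, the well-exposed value dominates the fused result, which is the intent of brightness-based weighting.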
In one embodiment, the high dynamic range synthesis of the sub-region set according to the aforementioned weights comprises:
(1) filtering the weight to obtain a filtered weight;
(2) and performing high dynamic range synthesis on the sub-region set according to the filtered weight.
It should be noted that for most scenes, only the overexposed and underexposed parts of the image need high dynamic range synthesis; synthesizing the normally exposed parts may actually lose local detail.
Therefore, when performing high dynamic range synthesis, the electronic device first filters the weights, removing the weights corresponding to the normally exposed parts, to obtain the filtered weights. High dynamic range synthesis is then performed on the sub-region set according to the filtered weights, yielding the corresponding high-dynamic composite sub-region.
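One way to realize this weight filtering is sketched below; the luminance thresholds and the keep-the-reference policy are assumptions, since the patent only states that the weights of normally exposed parts are filtered out:

```python
def filter_weights(ref_lum, weights, low=30, high=225):
    """If the pixel in the reference sub-region is normally exposed,
    suppress the other exposures' weights so local detail is taken from
    the reference alone; only over- or under-exposed pixels are fused.
    The reference sub-region's weight is assumed to come first.
    """
    if low <= ref_lum <= high:  # normally exposed: keep reference only
        return [1.0] + [0.0] * (len(weights) - 1)
    return weights              # clipped pixel: fuse all exposures
```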
In an embodiment, after generating a high dynamic composite image of a plurality of images according to the high dynamic composite sub-regions corresponding to the plurality of sub-region sets, the method further includes:
and carrying out video coding according to the high-dynamic synthetic image to obtain a high-dynamic video.
In the embodiment of the application, after the electronic device obtains the high-dynamic composite image through synthesis, the electronic device further performs video coding according to the high-dynamic composite image to obtain the high-dynamic composite video.
In this embodiment, the video coding format used is not specifically limited, and a person skilled in the art may select one according to actual needs, including but not limited to H.264, H.265, and MPEG-4.
In one embodiment, performing video encoding according to the high-dynamic composite image to obtain a high-dynamic video includes:
(1) carrying out smoothing treatment on the high dynamic synthetic image to obtain a smoothed high dynamic synthetic image;
(2) and carrying out video coding according to the smoothed high-dynamic synthetic image to obtain a high-dynamic video.
In the embodiment of the present application, when performing video coding according to a high dynamic composite image, an electronic device first performs smoothing processing on the high dynamic composite image to obtain a smoothed high dynamic composite image. The smoothing method is not particularly limited in the embodiments of the present application.
For example, in the embodiments of the present application, the electronic device smooths the adjacent portions between the segmented sub-regions in the high-dynamic composite image by bilinear interpolation, making the transitions between sub-regions of the composite image smoother.
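A one-dimensional sketch of this seam smoothing: pixel values across the boundary between two adjacent composite sub-regions are linearly interpolated, so the hard edge becomes a gradual transition (the 4-pixel seam width is an assumed example):

```python
def blend_seam(left_val, right_val, width=4):
    """Linearly interpolate across a vertical seam between two adjacent
    composite sub-regions over `width` pixels. `left_val`/`right_val`
    are the pixel values on either side of the seam."""
    out = []
    for i in range(width):
        t = (i + 0.5) / width          # 0 -> left side, 1 -> right side
        out.append((1 - t) * left_val + t * right_val)
    return out
```

Full bilinear smoothing would apply this interpolation along both axes of the seam neighborhood, but the blending principle is the same.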
In one embodiment, acquiring a plurality of images with different exposure parameters includes:
(1) when a shooting request aiming at a shooting scene is received, identifying the backlight environment of the shooting scene;
(2) and when the shooting scene is identified to be in a backlight environment, shooting the shooting scene according to different exposure parameters to obtain a plurality of images with different exposure parameters.
In the embodiments of the application, when shooting a scene with a very large brightness difference, such as a backlit scene, details in the bright and/or dark parts of the captured image are easily lost. Therefore, when receiving a shooting request for a scene, the electronic device may perform backlight environment identification on it; when the scene is identified as backlit, the device shoots it with different exposure parameters to obtain a plurality of images, and aligns and synthesizes them locally according to the method provided in the embodiments above, obtaining a high dynamic range image of the shooting scene.
It should be noted that the backlight environment recognition of the shooting scene can be implemented in various ways, and as an alternative implementation, the environment parameters of the shooting scene may be acquired, and the backlight environment recognition of the shooting scene is performed according to the acquired environment parameters.
Since the electronic device is in the same environment as the shooting scene, the device's own environmental parameters can be acquired and used as those of the scene. The environmental parameters include, but are not limited to, time information, the time zone and location of the device, weather information, and the orientation of the device. After the environmental parameters are acquired, they can be fed into a pre-trained support vector machine classifier, which classifies the input to determine whether the shooting scene is in a backlight environment.
As another optional implementation, histogram information of the shooting scene in a preset channel may be acquired, and backlight environment recognition may be performed on the shooting scene according to the acquired histogram information.
The preset channels are R, G, and B. To acquire the histogram information of the shooting scene, a preview image of the scene is obtained, the histogram information of the preview image in the three R, G, B channels is computed, and this is used as the histogram information of the scene in the preset channels. The histogram information is then analyzed statistically; specifically, the number of pixels at each brightness level is counted. If the statistical result satisfies a preset condition, the scene is judged to be backlit; otherwise it is not. For example, the preset condition may be: the pixel counts of a first (dark) brightness interval and a second (bright) brightness interval both reach a preset count threshold, and the lowest brightness is below a first preset brightness threshold and/or the highest brightness is above a second preset brightness threshold. The count threshold and the two brightness thresholds are empirical parameters, for which a person skilled in the art can choose appropriate values according to actual needs.
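The preset condition above can be sketched as a check on the dark and bright ends of the luminance histogram; all threshold values below are assumed examples, since the patent describes them only as empirical parameters:

```python
def is_backlit(lums, dark_max=50, bright_min=200, count_frac=0.2):
    """Histogram-based backlight check: the scene is flagged backlit
    when both a dark interval (<= dark_max) and a bright interval
    (>= bright_min) hold at least `count_frac` of all pixels.
    Thresholds are illustrative, not taken from the patent."""
    n = len(lums)
    dark = sum(1 for l in lums if l <= dark_max)
    bright = sum(1 for l in lums if l >= bright_min)
    return dark / n >= count_frac and bright / n >= count_frac
```

A preview frame with both a large dark mass and a large bright mass (e.g. a subject in front of a window) trips the check, while an evenly exposed frame does not.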
In the embodiment of the application, when the shooting scene is identified as being in a backlight environment, a plurality of different exposure parameters can be set according to the current backlight degree, and the shooting scene can be shot with each of the set exposure parameters to obtain a plurality of images with different exposure parameters.
The backlight degree can be output by the support vector machine classifier together with the result of whether the shooting scene is in a backlight environment: when the output result indicates a backlight environment, the corresponding backlight degree is output simultaneously.
When the electronic device receives from the support vector machine classifier the result that the shooting scene is in a backlight environment, it simultaneously acquires the backlight degree output by the classifier as the current backlight degree. A plurality of different exposure parameters corresponding to the current backlight degree are then set according to a pre-stored mapping between backlight degrees and exposure parameters. When acquiring the plurality of images, the electronic device shoots the shooting scene separately with each of the set exposure parameters, obtaining a plurality of images corresponding to the different exposure parameters. The brightness information of these images differs, ranging from dark to bright, but their image content is the same, namely the image content of the shooting scene.
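A minimal sketch of such a pre-stored mapping, assuming the backlight degree is quantized into a few levels and the exposure parameter being varied is shutter time; the level names, EV offsets, and base shutter value are all hypothetical.

```python
# Hypothetical mapping from backlight degree to EV offsets, ordered so
# that the resulting images run from dark to bright.
BACKLIGHT_EV_MAP = {
    "mild":   (-1.0, 0.0, 1.0),
    "severe": (-3.0, -1.5, 0.0, 1.5, 3.0),
}

def exposure_times_for(degree, base_shutter=0.01):
    # Each EV step doubles or halves the exposure time around the base.
    return [base_shutter * (2.0 ** ev) for ev in BACKLIGHT_EV_MAP[degree]]
```

A stronger backlight degree maps to more frames spread over a wider exposure range, matching the dark-to-bright acquisition order described above.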
Referring to fig. 5, the flow of the image processing method provided by the present application may also be:
in 201, the electronic device acquires a plurality of images with different exposure parameters.
In the embodiment of the application, the electronic device can shoot a shooting scene according to different exposure parameters when receiving a triggered shooting request, so that a plurality of images with different exposure parameters are obtained. The shooting request can be triggered in various ways, such as by a virtual key, by a physical key, by a voice command, and the like.
For example, referring to fig. 2, after the user operates the electronic device to start a photographing application (e.g., a system application "camera" of the electronic device), the user can trigger a photographing request by clicking a "photographing" key (which is a virtual key) provided by the application interface after moving the electronic device so that a camera of the electronic device is aligned with a photographing scene.
For another example, after the user operates the electronic device to start the photographing application, the user can speak the voice instruction "photograph" and trigger the photographing request after the camera of the electronic device is aligned with the photographing scene by moving the electronic device, or directly click a physical photographing key set in the electronic device to trigger the photographing request.
After receiving the triggered shooting request, the electronic device immediately responds to it by shooting the shooting scene according to different exposure parameters, acquiring a plurality of images of the shooting scene corresponding to the different exposure parameters. Because the exposure parameters differ, these images differ only in brightness information; their image content is the same, namely the image content of the shooting scene. The exposure parameters include, but are not limited to, sensitivity, shutter speed, and aperture size.
As an optional implementation, when shooting, the electronic device may sequentially acquire N sets of locally pre-stored exposure parameters. Each time a set of exposure parameters is acquired, the shooting scene is shot according to that set in combination with the other shooting parameters, and so on, until a plurality of images corresponding to the N sets of different exposure parameters are obtained. The electronic device may acquire the pre-stored exposure parameters in the order in which they cause the brightness of the resulting image to range from low to high. Apart from the differing exposure parameters, the other shooting parameters of the plurality of captured images are identical.
For example, suppose two sets of exposure parameters, a first exposure parameter and a second exposure parameter, are pre-stored locally in the electronic device, where the brightness of an image shot with the first exposure parameter is lower than that of an image shot with the second. When responding to a received shooting request, the electronic device first acquires the first exposure parameter and shoots the shooting scene according to it in combination with the other shooting parameters, and then acquires the second exposure parameter and shoots the scene accordingly.
As another optional implementation, when shooting, the electronic device may shoot the shooting scene using exposure bracketing. Specifically, the electronic device performs photometry on the shooting scene to obtain a photometric value, determines the exposure parameter corresponding to that photometric value according to a preset mapping between photometric values and exposure parameters, and shoots the shooting scene with the determined exposure parameter. Then, starting from the determined exposure parameter, it increases and decreases the parameter by a preset step value and shoots the scene with each of the increased and decreased exposure parameters, thereby obtaining a plurality of images corresponding to different exposure parameters. The number of times the exposure parameter is increased and decreased is not limited; for example, one decrease and one increase yields three images with different exposure parameters, while two decreases and two increases yields five.
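The bracketing step described above, a metered base exposure plus symmetric increases and decreases by a preset step, can be sketched as follows; the step size in EV and the count per side are the free parameters the text mentions.

```python
def bracket_evs(base_ev, step=1.0, per_side=1):
    # Returns exposure values from darkest to brightest around the
    # metered base: per_side=1 gives three frames, per_side=2 gives five.
    return [base_ev + step * k for k in range(-per_side, per_side + 1)]
```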
At 202, the electronic device performs adaptive segmentation on each image, and segments each image into a plurality of sub-regions.
After acquiring a plurality of images with different exposure parameters, the electronic device performs adaptive segmentation on each acquired image according to a preset adaptive segmentation algorithm, so that each image is segmented into a plurality of sub-regions. It should be noted that the number of the sub-regions obtained by dividing is related to the actual content of the image, and is not a fixed value.
In the embodiment of the present application, no specific limitation is imposed on what kind of adaptive segmentation algorithm is used to perform adaptive segmentation on the acquired image, and a person skilled in the art can select the adaptive segmentation algorithm according to actual needs.
At 203, the electronic device combines sub-regions with the same image content into a sub-region set, resulting in a plurality of sub-region sets.
As is clear from the above, since the entire image contents of the plurality of acquired images are the same, the image contents of the sub-regions at the corresponding positions of the different images are also the same after the plurality of sub-regions are divided.
In this embodiment, the electronic device further combines the sub-regions with the same image content into a sub-region set, thereby obtaining a plurality of sub-region sets.
For example, referring to fig. 3, suppose the electronic device acquires 3 images, namely an image A, an image B, and an image C, and divides each of them into 3 sub-regions: image A into sub-region 1, sub-region 2, and sub-region 3; image B into sub-region 4, sub-region 5, and sub-region 6; and image C into sub-region 7, sub-region 8, and sub-region 9. Then sub-region 1, sub-region 4, and sub-region 7, which have the same image content, are combined into sub-region set 1; sub-region 2, sub-region 5, and sub-region 8, which have the same image content, are combined into sub-region set 2; and sub-region 3, sub-region 6, and sub-region 9, which have the same image content, are combined into sub-region set 3.
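Assuming the adaptive segmentation emits each image's sub-regions in the same positional order, the combination step is a transpose over the per-image lists. The sketch below reproduces the fig. 3 example with sub-regions represented by their labels.

```python
def group_into_sets(subregions_per_image):
    # subregions_per_image[i] lists image i's sub-regions in positional
    # order; corresponding positions across images form one set.
    return [list(group) for group in zip(*subregions_per_image)]
```

With images A, B, and C segmented as in the example, set 1 collects sub-regions 1, 4, and 7, set 2 collects 2, 5, and 8, and set 3 collects 3, 6, and 9.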
In 204, for each sub-region set, the electronic device determines a reference sub-region from the sub-region set, performs feature point identification on the reference sub-region and non-reference sub-regions in the sub-region set to obtain a matching feature point pair of each non-reference sub-region and the reference sub-region, finds a homography matrix according to the matching feature point pair corresponding to each non-reference sub-region, and performs affine transformation according to the homography matrix corresponding to each non-reference sub-region to align each non-reference sub-region with the reference sub-region.
It should be noted that although the image content is the same as a whole, details in the image may not be aligned due to movement of the electronic device at the time of shooting or movement of the object itself in the shooting scene.
Thus, the electronic device aligns each set of sub-regions separately. The electronic equipment firstly determines a reference sub-area from the sub-area set, and then aligns other sub-areas in the sub-area set with the reference sub-area by taking the reference sub-area as a reference so as to realize the alignment of the sub-area set. It should be noted that, as to how to select the reference sub-region, no particular limitation is imposed in this application, for example, a sub-region may be randomly selected from the sub-region set as the reference sub-region, and for example, a sub-region with the highest definition may be determined from the sub-region set as the reference sub-region.
After the reference sub-regions are determined, the electronic device further performs feature point identification on the reference sub-regions and the non-reference sub-regions in the sub-region set according to a preset feature point identification algorithm, and matches the identified feature points to obtain matched feature point pairs of each non-reference sub-region and the reference sub-region. The method for identifying the feature points by using the feature point identification algorithm is not particularly limited in the present application, and can be selected by a person of ordinary skill in the art according to actual needs, including but not limited to a SIFT algorithm and a Harris corner algorithm.
After matching characteristic point pairs of each non-reference sub-region and the reference sub-region are obtained through matching, the electronic device further obtains a homography matrix according to the matching characteristic point pairs corresponding to each non-reference sub-region, and then performs affine transformation according to the homography matrix corresponding to each non-reference sub-region to align each non-reference sub-region and the reference sub-region, so that alignment of the sub-region sets is achieved.
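For illustration, the homography-solving step can be sketched with the direct linear transform over four or more matched feature-point pairs; a production implementation would typically add a robust estimator such as RANSAC over the matches, which this sketch omits.

```python
import numpy as np

def find_homography(src_pts, dst_pts):
    # Direct linear transform (DLT): each correspondence contributes two
    # rows to A, and the homography is the null vector of A (via SVD).
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]   # normalize scale

def warp_point(h, pt):
    # Apply the homography to one point (homogeneous coordinates).
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

Warping every pixel of a non-reference sub-region with the recovered matrix is the affine-transformation step that brings it into alignment with the reference sub-region.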
In 205, the electronic device obtains a weight for high dynamic range synthesis according to the brightness information of the same pixel position in the sub-region set, and performs filtering processing on the weight to obtain a filtered weight.
In 206, the electronic device performs high dynamic range synthesis on the sub-region set according to the filtered weight to obtain a corresponding high dynamic synthesis sub-region.
In the embodiment of the present application, in consideration of the fact that exposure parameters of sub-regions in the same sub-region set are different, luminance information (for example, luminance values) of the same pixel position in different sub-regions can reflect differences of a shooting scene under different exposure parameters. Therefore, the electronic device can analyze the weight for high dynamic range synthesis of the sub-region set according to the brightness information of the same pixel position in the sub-region set.
After determining the weight for high dynamic range synthesis, high dynamic range synthesis can be performed on the sub-region set according to the weight, so as to obtain a corresponding high dynamic synthesis sub-region.
It should be noted that, for most scenes, only the overexposed and underexposed parts of the image need high dynamic range synthesis; synthesizing the normally exposed part may instead cause loss of local detail and similar degradation.
Therefore, before performing high dynamic range synthesis, the electronic device first filters the weights, filtering out the weights corresponding to the normally exposed part to obtain the filtered weights. High dynamic range synthesis is then performed on the sub-region set according to the filtered weights to obtain the corresponding high dynamic synthesis sub-region.
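A minimal single-channel sketch of steps 205 and 206: the per-pixel weight is taken as a Gaussian well-exposedness measure around mid-gray (the patent does not fix a formula), and the filtering step is interpreted as keeping the reference frame where it is normally exposed and blending only where it is over- or under-exposed. The 0.1/0.9 bounds and the Gaussian width are hypothetical.

```python
import numpy as np

def merge_subregion_set(stack):
    # stack: aligned sub-regions of one set, values in [0, 255],
    # ordered dark to bright; the middle frame acts as the reference.
    imgs = np.stack([np.asarray(s, dtype=float) / 255.0 for s in stack])
    ref = imgs[len(imgs) // 2]
    # Weight for high dynamic range synthesis: closeness to mid-gray.
    w = np.exp(-((imgs - 0.5) ** 2) / (2 * 0.2 ** 2))
    w /= w.sum(axis=0, keepdims=True)
    blended = (w * imgs).sum(axis=0)
    # Filtered weights: normally exposed pixels keep the reference value,
    # so only over- and under-exposed pixels are actually synthesized.
    normal = (ref > 0.1) & (ref < 0.9)
    return np.where(normal, ref, blended)
```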
At 207, the electronic device generates a high dynamic composite image of the plurality of images from the high dynamic composite sub-regions corresponding to the plurality of sets of sub-regions.
In the embodiment of the application, after obtaining the high dynamic synthesis sub-region of each sub-region set, the electronic device generates the high dynamic synthesis image of the acquired plurality of images from the high dynamic synthesis sub-regions corresponding to the plurality of sub-region sets. For example, referring to fig. 3 and fig. 4 in combination, the electronic device synthesizes high dynamic synthesis sub-region 1 corresponding to sub-region set 1, high dynamic synthesis sub-region 2 corresponding to sub-region set 2, and high dynamic synthesis sub-region 3 corresponding to sub-region set 3. Then, according to the position in the original image of the original sub-region corresponding to each high dynamic synthesis sub-region, the electronic device splices high dynamic synthesis sub-region 1, high dynamic synthesis sub-region 2, and high dynamic synthesis sub-region 3 into a new image, which serves as the high dynamic synthesis image of the acquired plurality of images.
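Step 207 then places each synthesized sub-region back at its original position. A sketch assuming rectangular sub-regions recorded as (top, left, patch) tuples:

```python
import numpy as np

def stitch(canvas_shape, placed_patches):
    # placed_patches: iterable of (top, left, patch) giving each high
    # dynamic synthesis sub-region and its position in the original image.
    out = np.zeros(canvas_shape, dtype=float)
    for top, left, patch in placed_patches:
        h, w = patch.shape[:2]
        out[top:top + h, left:left + w] = patch
    return out
```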
At 208, the electronic device smoothes the high dynamic composite image to obtain a smoothed high dynamic composite image.
In the embodiment of the application, after obtaining the high dynamic synthetic image, the electronic device further performs smoothing processing on the high dynamic synthetic image to obtain a smoothed high dynamic synthetic image. In this embodiment, no specific limitation is imposed on what smoothing method is adopted.
For example, in the embodiment of the present application, the electronic device performs a smoothing process on an adjacent portion between the correspondingly segmented sub-regions in the high dynamic synthetic image in a bilinear interpolation manner, so as to make a transition between the sub-regions of the high dynamic synthetic image smoother.
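As a one-dimensional illustration of that smoothing, the sketch below linearly interpolates across a vertical seam between two stitched sub-regions (the full bilinear case does the same in both directions); the seam half-width is a hypothetical parameter.

```python
import numpy as np

def smooth_vertical_seam(img, seam_col, half_width=2):
    # Replace the columns around the seam with values interpolated
    # between the columns bounding the smoothing band.
    out = np.asarray(img, dtype=float).copy()
    lo, hi = seam_col - half_width, seam_col + half_width
    left, right = out[:, lo].copy(), out[:, hi].copy()
    for k, col in enumerate(range(lo, hi + 1)):
        t = k / (hi - lo)
        out[:, col] = (1.0 - t) * left + t * right
    return out
```

A hard 0-to-100 step across the seam becomes a gradual ramp, which is the smoother transition between sub-regions the text describes.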
In one embodiment, an image processing apparatus is also provided. Referring to fig. 6, fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus is applied to an electronic device, and includes an image acquisition module 301, an image segmentation module 302, a region combination module 303, an alignment synthesis module 304, and an image generation module 305, as follows:
an image obtaining module 301, configured to obtain a plurality of images with different exposure parameters;
an image segmentation module 302, configured to perform adaptive segmentation on each image, and segment each image into a plurality of sub-regions;
the region combination module 303 is configured to combine the sub-regions with the same image content into a sub-region set, so as to obtain a plurality of sub-region sets;
an alignment synthesis module 304, configured to align each sub-region set, and perform high dynamic range synthesis on each sub-region set to obtain corresponding high dynamic synthesis sub-regions;
an image generating module 305, configured to generate a high-dynamic composite image of the multiple images according to the high-dynamic composite sub-regions corresponding to the multiple sub-region sets.
In one embodiment, in aligning each set of sub-regions, the alignment synthesis module 304 is to:
determining a reference sub-region from the set of sub-regions;
carrying out characteristic point identification on the reference sub-area and the non-reference sub-areas in the sub-area set to obtain a matched characteristic point pair of each non-reference sub-area and the reference sub-area;
solving a homography matrix according to the matching characteristic point pairs corresponding to each non-reference subarea;
and performing affine transformation according to the homography matrix corresponding to each non-reference sub-region so as to align each non-reference sub-region with the reference sub-region.
In one embodiment, in high dynamic range synthesis of each set of sub-regions, the alignment synthesis module 304 is configured to:
acquiring the weight for high dynamic range synthesis according to the brightness information of the same pixel position in the sub-region set;
and performing high dynamic range synthesis on the sub-region sets according to the weights.
In an embodiment, when performing high dynamic range synthesis on the sub-region set according to the aforementioned weights, the alignment synthesis module 304 is configured to:
filtering the weight to obtain a filtered weight;
and performing high dynamic range synthesis on the sub-region set according to the filtered weight.
In an embodiment, the image processing apparatus provided by the present application further includes a video encoding module, after generating a high dynamic composite image of the plurality of images according to the high dynamic composite sub-regions corresponding to the plurality of sets of sub-regions, configured to:
and carrying out video coding according to the high-dynamic synthetic image to obtain a high-dynamic video.
In an embodiment, when the high dynamic video is obtained by performing video coding according to the high dynamic composite image, the video coding module is configured to:
carrying out smoothing treatment on the high dynamic synthetic image to obtain a smoothed high dynamic synthetic image;
and carrying out video coding according to the smoothed high-dynamic synthetic image to obtain a high-dynamic video.
In one embodiment, when acquiring a plurality of images with different exposure parameters, the image acquisition module 301 is configured to:
when a shooting request aiming at a shooting scene is received, identifying the backlight environment of the shooting scene;
and when the shooting scene is identified to be in a backlight environment, shooting the shooting scene according to different exposure parameters to obtain a plurality of images with different exposure parameters.
It should be noted that the image processing apparatus provided in the embodiment of the present application and the image processing method in the foregoing embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be executed on the image processing apparatus, and the specific implementation process thereof is described in the foregoing embodiment, and is not described herein again.
In an embodiment, an electronic device is further provided, and referring to fig. 7, the electronic device includes a processor 401 and a memory 402.
The processor 401 in the present embodiment is a general purpose processor, such as an ARM architecture processor.
The memory 402 stores a computer program. The memory may be a high-speed random access memory or a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the computer program in the memory 402, so as to implement the following functions:
acquiring a plurality of images with different exposure parameters;
respectively carrying out self-adaptive segmentation on each image, and segmenting each image into a plurality of sub-regions;
combining the subareas with the same image content into a subarea set to obtain a plurality of subarea sets;
respectively aligning each sub-region set, and performing high dynamic range synthesis on each sub-region set to obtain corresponding high dynamic synthesis sub-regions;
and generating a high-dynamic composite image of the plurality of images according to the high-dynamic composite subareas corresponding to the plurality of subarea sets.
In one embodiment, in aligning each set of sub-regions, the processor 401 is configured to perform:
determining a reference sub-region from the set of sub-regions;
carrying out characteristic point identification on the reference sub-area and the non-reference sub-areas in the sub-area set to obtain a matched characteristic point pair of each non-reference sub-area and the reference sub-area;
solving a homography matrix according to the matching characteristic point pairs corresponding to each non-reference subarea;
and carrying out affine transformation according to the homography matrix corresponding to each non-reference sub-area so as to align each non-reference sub-area with the reference sub-area.
In one embodiment, in high dynamic range synthesis for each set of sub-regions, the processor 401 is configured to perform:
acquiring the weight for high dynamic range synthesis according to the brightness information of the same pixel position in the sub-region set;
and performing high dynamic range synthesis on the sub-region sets according to the weights.
In an embodiment, when performing high dynamic range synthesis on the set of sub-regions according to the aforementioned weights, the processor 401 is configured to perform:
filtering the weight to obtain the filtered weight;
and performing high dynamic range synthesis on the sub-region set according to the filtered weight.
In an embodiment, after generating a high dynamic composite image of the plurality of images from the high dynamic composite sub-regions corresponding to the plurality of sets of sub-regions, the processor 401 is further configured to perform:
and carrying out video coding according to the high-dynamic synthetic image to obtain a high-dynamic video.
In an embodiment, when performing video coding according to the high dynamic composite image to obtain a high dynamic video, the processor 401 is configured to perform:
carrying out smoothing treatment on the high dynamic synthetic image to obtain a smoothed high dynamic synthetic image;
and carrying out video coding according to the smoothed high-dynamic synthetic image to obtain a high-dynamic video.
In one embodiment, when acquiring a plurality of images with different exposure parameters, the processor 401 is configured to:
when a shooting request aiming at a shooting scene is received, identifying the backlight environment of the shooting scene;
and when the shooting scene is identified to be in a backlight environment, shooting the shooting scene according to different exposure parameters to obtain a plurality of images with different exposure parameters.
It should be noted that the electronic device provided in the embodiment of the present application and the image processing method in the foregoing embodiment belong to the same concept; any method provided in the embodiments of the image processing method may be run on the electronic device, and its specific implementation process is described in detail in the embodiments of the image processing method and is not repeated here.
It should be noted that, for the image processing method of the embodiments of the present application, those of ordinary skill in the art can understand that all or part of the process of implementing the method can be completed by controlling the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, such as a memory of an electronic device, and executed by a processor and/or a dedicated voice recognition chip in the electronic device; the execution process can include, for example, the processes of the embodiments of the image processing method. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
The foregoing has described in detail an image processing method, an image processing apparatus, a storage medium, and an electronic device provided by embodiments of the present application. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may, according to the ideas of the present application, make changes to the specific implementations and application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An image processing method applied to an electronic device, comprising:
acquiring a plurality of images with different exposure parameters;
performing self-adaptive segmentation on each image according to the image content of each image, and segmenting each image into a plurality of sub-regions;
combining the subregions with the same image content into a subregion set to obtain a plurality of subregion sets;
respectively aligning each sub-region set, and performing high dynamic range synthesis on each sub-region set to obtain corresponding high dynamic synthesis sub-regions;
and generating a high-dynamic synthetic image of the plurality of images according to the high-dynamic synthetic subareas corresponding to the plurality of subarea sets.
2. The method of claim 1, wherein said aligning each set of sub-regions comprises:
determining a reference sub-region from the set of sub-regions;
performing characteristic point identification on the reference sub-regions and the non-reference sub-regions in the sub-region set to obtain matched characteristic point pairs of each non-reference sub-region and the reference sub-region;
solving a homography matrix according to the matching characteristic point pairs corresponding to each non-reference subarea;
and carrying out affine transformation according to the homography matrix corresponding to each non-reference sub-area so as to align each non-reference sub-area with the reference sub-area.
3. The method according to claim 1, wherein the performing high dynamic range synthesis on each sub-region set comprises:
acquiring the weight for high dynamic range synthesis according to the brightness information of the same pixel position in the sub-region set;
and performing high dynamic range synthesis on the sub-region set according to the weight.
4. The method of claim 3, wherein the high dynamic range synthesizing the set of sub-regions according to the weights comprises:
filtering the weight to obtain a filtered weight;
and performing high dynamic range synthesis on the sub-region set according to the filtered weight.
5. The image processing method according to any one of claims 1 to 4, wherein after generating a high-dynamic synthesized image of the plurality of images from the high-dynamic synthesized sub-regions corresponding to the plurality of sets of sub-regions, the method further comprises:
and carrying out video coding according to the high-dynamic synthetic image to obtain a high-dynamic video.
6. The image processing method according to claim 5, wherein said performing video coding according to the high-dynamic synthetic image to obtain a high-dynamic video comprises:
carrying out smoothing treatment on the high dynamic synthetic image to obtain a smoothed high dynamic synthetic image;
and carrying out video coding according to the smoothed high-dynamic synthetic image to obtain a high-dynamic video.
7. The image processing method according to any one of claims 1 to 4, wherein the acquiring a plurality of images with different exposure parameters includes:
when a shooting request for a shooting scene is received, carrying out backlight environment identification on the shooting scene;
and when the shooting scene is identified to be in a backlight environment, shooting the shooting scene according to different exposure parameters to obtain a plurality of images with different exposure parameters.
8. An image processing apparatus applied to an electronic device, comprising:
the image acquisition module is used for acquiring a plurality of images with different exposure parameters;
the image segmentation module is used for performing self-adaptive segmentation on each image according to the image content of each image and segmenting each image into a plurality of sub-regions;
the area combination module is used for combining the sub-areas with the same image content into a sub-area set to obtain a plurality of sub-area sets;
the alignment synthesis module is used for respectively aligning each sub-region set and carrying out high dynamic range synthesis on each sub-region set to obtain a corresponding high dynamic synthesis sub-region;
and the image generation module is used for generating a high-dynamic synthetic image of the plurality of images according to the high-dynamic synthetic subareas corresponding to the plurality of subarea sets.
9. A storage medium on which a computer program is stored, characterized in that the computer program, when loaded by a processor, executes an image processing method according to any one of claims 1 to 7.
10. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is adapted to perform the image processing method according to any one of claims 1 to 7 by loading the computer program.
CN201911252972.9A 2019-12-09 2019-12-09 Image processing method, image processing device, storage medium and electronic equipment Active CN110971841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911252972.9A CN110971841B (en) 2019-12-09 2019-12-09 Image processing method, image processing device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110971841A CN110971841A (en) 2020-04-07
CN110971841B true CN110971841B (en) 2022-07-15

Family

ID=70033528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911252972.9A Active CN110971841B (en) 2019-12-09 2019-12-09 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110971841B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083389B (en) * 2019-12-27 2021-11-16 维沃移动通信有限公司 Method and device for shooting image
CN112017218A (en) * 2020-09-09 2020-12-01 杭州海康威视数字技术股份有限公司 Image registration method and device, electronic equipment and storage medium
CN112822426B (en) * 2020-12-30 2022-08-30 上海掌门科技有限公司 Method and equipment for generating high dynamic range image
WO2023124201A1 (en) * 2021-12-29 2023-07-06 荣耀终端有限公司 Image processing method and electronic device
CN114554050B (en) * 2022-02-08 2024-02-27 维沃移动通信有限公司 Image processing method, device and equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104469176A (en) * 2014-12-02 2015-03-25 安徽拓威科技有限公司 Anti-glare device of intelligent toll gate and control method of anti-glare device
CN106303269A (en) * 2015-12-28 2017-01-04 北京智谷睿拓技术服务有限公司 Image acquisition control method and device, image capture device
JP2017195645A (en) * 2017-08-07 2017-10-26 日本電気株式会社 Video encoding device, video decoding device, video system, video encoding method, and video encoding program
EP3340165A1 (en) * 2016-12-20 2018-06-27 Thomson Licensing Method of color gamut mapping input colors of an input ldr content into output colors forming an output hdr content
CN109166077A (en) * 2018-08-17 2019-01-08 广州视源电子科技股份有限公司 Image alignment method and apparatus, computer-readable storage medium, and computer device
CN109242811A (en) * 2018-08-16 2019-01-18 广州视源电子科技股份有限公司 Image alignment method and device, computer-readable storage medium and computer device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104144298B (en) * 2014-07-16 2017-09-19 浙江宇视科技有限公司 Wide dynamic range image synthesis method


Also Published As

Publication number Publication date
CN110971841A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN110971841B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
US10917571B2 (en) Image capture device control based on determination of blur value of objects in images
CN108335279B (en) Image fusion and HDR imaging
US10284789B2 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
CN110839129A (en) Image processing method and device and mobile terminal
CN110620873B (en) Device imaging method and device, storage medium and electronic device
CN109996009B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109729272B (en) Shooting control method, terminal device and computer readable storage medium
CN107395957B (en) Photographing method and device, storage medium and electronic equipment
CN107623819B (en) Photographing method, mobile terminal, and related media product
CN110290299B (en) Imaging method, imaging device, storage medium and electronic equipment
CN104754227A (en) Method and device for shooting video
CN112258380A (en) Image processing method, device, equipment and storage medium
CN110035237B (en) Image processing method, image processing device, storage medium and electronic equipment
US10769416B2 (en) Image processing method, electronic device and storage medium
CN110300268A (en) Camera switching method and equipment
CN112367465B (en) Image output method and device and electronic equipment
CN114390197A (en) Shooting method and device, electronic equipment and readable storage medium
CN111192286A (en) Image synthesis method, electronic device and storage medium
CN111225144A (en) Video shooting method and device, electronic equipment and computer storage medium
CN108495038B (en) Image processing method, image processing device, storage medium and electronic equipment
CN113592753A (en) Image processing method and device based on industrial camera shooting and computer equipment
CN108540715B (en) Picture processing method, electronic equipment and computer readable storage medium
CN112367464A (en) Image output method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant