CN110572584A - Image processing method, image processing device, storage medium and electronic equipment - Google Patents


Info

Publication number: CN110572584A (application CN201910791524.XA; granted as CN110572584B)
Authority: CN (China)
Prior art keywords: image, images, frames, preview, shot
Inventor: 邵安宝
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Other languages: Chinese (zh)
Legal status: Granted; active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/70 — Circuitry for compensating brightness variation in the scene
    • H04N 23/73 — Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, a storage medium, and an electronic device. The image processing method includes the following steps: dividing a shooting scene into a distant view and a close view; acquiring multiple frames of first images of the shooting scene, wherein the multiple frames of first images have the same exposure parameter; synthesizing the multiple frames of first images to obtain a first composite image of the close view; acquiring multiple frames of second images of the shooting scene, wherein the multiple frames of second images have different exposure parameters; synthesizing the multiple frames of second images to obtain a second composite image with a high dynamic range for the distant view; and synthesizing the first composite image and the second composite image to obtain a target image. In the image processing scheme provided by the embodiments, the close view uses multi-frame short-exposure high-dynamic-range synthesis and the distant view uses exposure-bracketing high-dynamic-range synthesis, so that image quality can be improved to a greater extent.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of image technologies, and in particular relates to an image processing method, an image processing apparatus, a storage medium, and an electronic device.
Background
With the continuous development of terminals, intelligent terminals have gradually become part of people's daily lives. A user can obtain an image of a shooting scene through the photographing or video recording function of a terminal.
In the related art, a terminal captures two frames of images of the same scene with two cameras and synthesizes the two frames into one composite image. Although the resulting composite image improves image quality to a certain extent, it still cannot meet end users' requirements for image quality.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, a storage medium, and an electronic device, which can improve the quality of processed images to a greater extent.
In a first aspect, an embodiment of the present application provides an image processing method, including:
dividing a shooting scene into a distant view and a close view;
acquiring multiple frames of first images of the shooting scene, wherein the multiple frames of first images have the same exposure parameter;
synthesizing the multiple frames of first images to obtain a first composite image with a high dynamic range for the close view;
acquiring multiple frames of second images of the shooting scene, wherein the multiple frames of second images have different exposure parameters;
synthesizing the multiple frames of second images to obtain a second composite image with a high dynamic range for the distant view;
and synthesizing the first composite image and the second composite image to obtain a target image with a high dynamic range.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a dividing module, configured to divide a shooting scene into a distant view and a close view;
a first acquisition module, configured to acquire multiple frames of first images of the shooting scene, wherein the multiple frames of first images have the same exposure parameter;
a first synthesis module, configured to synthesize the multiple frames of first images to obtain a first composite image with a high dynamic range for the close view;
a second acquisition module, configured to acquire multiple frames of second images of the shooting scene, wherein the multiple frames of second images have different exposure parameters;
a second synthesis module, configured to synthesize the multiple frames of second images to obtain a second composite image with a high dynamic range for the distant view;
and a third synthesis module, configured to synthesize the first composite image and the second composite image to obtain a target image with a high dynamic range.
In a third aspect, an embodiment of the present application provides a storage medium having a computer program stored thereon, wherein when the computer program runs on a computer, the computer is caused to execute the image processing method provided in any embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device including a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the image processing method provided in any embodiment of the present application by calling the computer program.
In the image processing method, multiple frames of first images of a shooting scene are acquired, the multiple frames of first images having the same exposure parameter, and are synthesized to obtain a first composite image with a high dynamic range for the close view; multiple frames of second images of the shooting scene are acquired, the multiple frames of second images having different exposure parameters, and are synthesized to obtain a second composite image with a high dynamic range for the distant view; finally, the first composite image and the second composite image are synthesized to obtain a target image. In this scheme, the close view uses multi-frame short-exposure high-dynamic-range synthesis and the distant view uses exposure-bracketing high-dynamic-range synthesis, so that image quality can be improved to a greater extent.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments, taken in conjunction with the accompanying drawings.
Fig. 1 is a first flowchart of an image processing method according to an embodiment of the present application.
Fig. 2 is a second flowchart of an image processing method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of the position areas of the same object in two cached images of the shooting scene in an embodiment of the present application.
Fig. 4 is a schematic diagram of the merged position area of the same object in an embodiment of the present application.
Fig. 5 is a third flowchart of an image processing method according to an embodiment of the present application.
Fig. 6 is a fourth flowchart of an image processing method according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a first composite image synthesized in an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 9 is a first structural schematic diagram of an electronic device according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of an image processing circuit according to an embodiment of the present application.
Fig. 11 is a second structural schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein. The term "module" as used herein may be considered a software object executing on the computing system. The various modules, engines, and services herein may be considered as objects implemented on the computing system.
An embodiment of the present application provides an image processing method. The execution subject of the image processing method may be the image processing apparatus provided in the embodiments of the present application, or an electronic device integrated with the image processing apparatus. The electronic device may be a device having processing capability and configured with a processor, such as a smartphone, a tablet computer, or a personal digital assistant (PDA).
The method is analyzed and described in detail below.
Referring to Fig. 1, Fig. 1 is a first flowchart of an image processing method according to an embodiment of the present application. The image processing method is applied to the electronic device provided by the embodiments of the present application. As shown in Fig. 1, the image processing method may include the following steps:
101. The shooting scene is divided into a distant view and a close view.
In this embodiment of the application, the electronic device divides the shooting scene into a distant view and a close view during shooting preview. There are many ways to do this, for example, dividing the shooting scene into a distant view and a close view according to the distance between each object in the shooting scene and the electronic device. It should be noted that, during shooting preview, the electronic device may expose the shooting scene using one camera, obtain a preview image, and display the preview image in the preview interface. The electronic device may also expose the shooting scene using multiple cameras, obtain a preview image, and display the preview image in the preview interface.
In this scheme, the shooting scene refers to the scene the user intends to capture through the camera, and it is presented in the preview image captured by the camera; that is, the scene the camera is aimed at is the shooting scene. For example, if the user aims the camera of the electronic device at a scene including an XX object, the scene including the XX object is the shooting scene. In addition, the shooting scene in the embodiments of the application does not refer to one specific scene, but to whatever scene the camera is aimed at in real time as its orientation changes. The shooting scene of the electronic device in the embodiments of the application contains at least two objects.
The distant view and the close view may be determined according to the distance between each object in the shooting scene and the electronic device: an object relatively far away is determined as part of the distant view, and an object relatively close is determined as part of the close view. For example, when the shooting scene of the electronic device contains two objects, the distances between the two objects and the electronic device are compared; the object with the larger distance is determined as the distant view, and the object with the smaller distance as the close view. For another example, when the shooting scene contains three or more objects, a distance value is used as the criterion: an object whose distance is greater than the distance value is determined as part of the distant view, and an object whose distance is less than or equal to the distance value as part of the close view.
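The distance-threshold division described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the object names, distances, and the 5-meter threshold are assumptions for the example.

```python
# Hypothetical sketch of dividing a shooting scene by object distance.
# Object names, distances, and the threshold are illustrative assumptions.

def divide_scene(objects, threshold_m=5.0):
    """objects: dict mapping object name -> distance to the device (meters).
    Returns (distant_view, close_view) lists of object names."""
    distant_view = [name for name, d in objects.items() if d > threshold_m]
    close_view = [name for name, d in objects.items() if d <= threshold_m]
    return distant_view, close_view

scene = {"sky": 1000.0, "building": 40.0, "person": 1.5, "table": 0.8}
distant, close = divide_scene(scene)
# distant -> ["sky", "building"], close -> ["person", "table"]
```

An object exactly at the threshold falls into the close view here, matching the "less than or equal to" rule above.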
In addition, the method of the embodiments of the application can be used in any scenario that requires shooting with a camera, for example, shooting with the camera while using social software, or starting the camera by tapping the icon of the "Camera" application on the electronic device with a finger, and so on.
It can be understood that, during shooting preview, the camera continuously acquires images of the shooting scene and displays them in the viewfinder in real time. During shooting preview, the shooting scene is divided into a distant view and a close view in real time.
102. Multiple frames of first images of the shooting scene are acquired, wherein the multiple frames of first images have the same exposure parameter.
103. The multiple frames of first images are synthesized to obtain a first composite image with a high dynamic range for the close view.
104. Multiple frames of second images of the shooting scene are acquired, wherein the multiple frames of second images have different exposure parameters.
105. The multiple frames of second images are synthesized to obtain a second composite image with a high dynamic range for the distant view.
106. The first composite image and the second composite image are synthesized to obtain a target image with a high dynamic range.
In this embodiment of the application, when a shooting instruction or a video recording instruction is detected, the electronic device executes steps 102 to 106. The user can trigger a shooting instruction or a video recording instruction in a preset manner. For example, the shooting instruction is triggered by a touch operation on the display screen, such as a three-finger downward slide on the display screen; or by a key combination, such as pressing a volume key and the lock-screen key simultaneously; or by tapping a shooting control on the display screen; or through a shooting control in a floating shortcut menu displayed on the display screen; or by a preset voice instruction; and so on.
In this scheme, when a shooting instruction or a video recording instruction is detected, the electronic device may expose the shooting scene through one or more cameras according to the same exposure parameter, thereby acquiring the multiple frames of first images; the electronic device may also directly acquire multiple frames of first images already captured by one or more cameras according to the same exposure parameter. Similarly, when a shooting instruction or a video recording instruction is detected, the electronic device may expose the shooting scene through one or more cameras according to different exposure parameters, thereby acquiring the multiple frames of second images; the electronic device may also directly acquire multiple frames of second images already captured by one or more cameras according to different exposure parameters.
The exposure parameter includes an exposure value (i.e., the commonly used EV value). The multiple frames of first images having the same exposure parameter may mean that their exposure values are the same, for example, all −2 EV. The multiple frames of second images having different exposure parameters may mean that their exposure values differ; for example, the multiple frames of second images include 3 frames whose exposure values are −1 EV, 0 EV, and 1 EV respectively, so the exposure values of the 3 second images are different from each other.
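The EV offsets above can be related to exposure as follows: at fixed aperture and ISO, each +1 EV of exposure compensation doubles the captured light, hence the shutter time. This is a general photographic relationship, sketched here; the 10 ms base time and the offset lists are illustrative assumptions, not values from the patent.

```python
# Sketch of how relative EV offsets map to shutter times at fixed aperture
# and ISO: each +1 EV doubles the captured light, hence the exposure time.
# The 10 ms base time and the offset lists are illustrative assumptions.

def exposure_times(base_ms, ev_offsets):
    return [base_ms * (2.0 ** ev) for ev in ev_offsets]

first_images = exposure_times(10.0, [0, 0, 0, 0])  # same exposure parameter
second_images = exposure_times(10.0, [-1, 0, 1])   # bracketed exposures
# second_images -> [5.0, 10.0, 20.0] (milliseconds)
```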
In this scheme, the manner of synthesizing the multiple frames of first images differs from the manner of synthesizing the multiple frames of second images. The multiple frames of first images are synthesized in a multi-frame short-exposure high-dynamic-range manner to obtain a first composite image with a high dynamic range for the close view. The multiple frames of second images are synthesized in an exposure-bracketing high-dynamic-range manner to obtain a second composite image with a high dynamic range for the distant view. Further, the distant view in the first composite image may or may not have a high dynamic range, and the close view in the second composite image may or may not have a high dynamic range.
Since the first composite image has a high dynamic range for the close view, the close view in the first composite image can provide more dynamic range and image detail than an ordinary image. Since the second composite image has a high dynamic range for the distant view, the distant view in the second composite image can provide more dynamic range and image detail than an ordinary image. The dynamic range of an image refers to the span of brightness in the image.
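The structural difference between the two synthesis modes can be sketched as follows. This is a minimal sketch under stated assumptions (aligned float images in [0, 1], a generic well-exposedness weight), not the patent's actual algorithm; real pipelines would add alignment, ghost removal, and tone mapping.

```python
import numpy as np

# Minimal sketch (an assumption, not the patent's algorithm) of the two
# synthesis modes, for aligned float images in [0, 1].

def merge_same_exposure(frames):
    """Multi-frame short exposure: average equally exposed frames to
    suppress noise, then stretch brightness to recover shadow detail."""
    avg = np.mean(frames, axis=0)
    return np.clip(avg / max(avg.max(), 1e-6), 0.0, 1.0)

def merge_bracketed(frames):
    """Exposure bracketing: weight each pixel by closeness to mid-gray
    (a generic well-exposedness weight), then normalize."""
    frames = np.stack(frames)
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * 0.2 ** 2))
    return (weights * frames).sum(axis=0) / weights.sum(axis=0).clip(1e-6)
```

The first mode trades several short noisy exposures for one cleaner brightened frame; the second keeps, per pixel, the information from whichever bracketed exposure rendered it best.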
In this scheme, after the first composite image with a high dynamic range for the close view and the second composite image with a high dynamic range for the distant view are obtained, the two are synthesized to obtain a target image with a high dynamic range. The close-view portion of the target image may come only from the close-view portion of the first composite image, and the distant-view portion of the target image only from the distant-view portion of the second composite image. Alternatively, the close-view portion of the target image may blend the close-view portions of both composite images, with the fusion ratio of the close-view portion of the first composite image higher than that of the second; likewise, the distant-view portion of the target image may blend the distant-view portions of both composite images, with the fusion ratio of the distant-view portion of the second composite image higher than that of the first. It can be understood that, although synthesizing the first and second composite images is not itself a high-dynamic-range synthesis method, both the close-view and distant-view portions of the resulting target image have more dynamic range than an ordinary image.
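The mask-based fusion just described can be sketched as follows. The boolean close-view mask and the 0.8/0.2 fusion ratios are illustrative assumptions; the patent only requires the first composite to dominate close-view pixels and the second to dominate distant-view pixels.

```python
import numpy as np

# Hedged sketch of the final fusion step: close-view pixels come mostly
# from the first composite, distant-view pixels mostly from the second.
# The mask and the 0.8/0.2 fusion ratios are illustrative assumptions.

def fuse(first, second, close_mask, close_weight=0.8):
    """first/second: HxW composite images; close_mask: HxW bool array,
    True where a pixel belongs to the close view."""
    w = np.where(close_mask, close_weight, 1.0 - close_weight)  # weight of `first`
    return w * first + (1.0 - w) * second

mask = np.array([[True, False], [True, False]])
target = fuse(np.ones((2, 2)), np.zeros((2, 2)), mask)
# close-view pixels -> 0.8, distant-view pixels -> 0.2
```

Setting `close_weight=1.0` reproduces the first variant above, where each portion of the target image comes from only one composite.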
In addition, it should be noted that the present application is not limited by the described order of the steps; some steps may be performed in other orders or simultaneously where no conflict arises. For example, in some embodiments, the acquired multiple frames of first images may be synthesized at the same time as the acquired multiple frames of second images.
As can be seen from the above, in the image processing method provided in this embodiment of the application, the electronic device divides the shooting scene into a distant view and a close view during preview. When a shooting instruction or a video recording instruction is received, a first composite image with a high dynamic range for the close view is obtained by multi-frame short-exposure high-dynamic-range synthesis, a second composite image with a high dynamic range for the distant view is obtained by exposure-bracketing high-dynamic-range synthesis, and finally the first and second composite images are synthesized to obtain the target image. The target image obtained in this scheme has the characteristics of multi-frame short exposure in the close view and of exposure bracketing in the distant view, so that image quality can be improved to a greater extent.
Referring to Fig. 2, Fig. 2 is a second flowchart of the image processing method according to an embodiment of the present application.
In this embodiment, before "dividing the shooting scene into a distant view and a close view", the method further includes:
107. A first preview image of the shooting scene is obtained through a first camera, and the first preview image is stored in a first image cache queue in the order in which the first camera performs exposure.
108. A second preview image of the shooting scene is obtained through a second camera, and the second preview image is stored in a second image cache queue in the order in which the second camera performs exposure.
In this embodiment, "dividing the shooting scene into a distant view and a close view" includes:
1011. Two consecutive frames of first preview images are acquired from the first image cache queue.
1012. The shooting scene is divided into a distant view and a close view according to the two frames of first preview images.
In this embodiment, "acquiring multiple frames of first images of the shooting scene" includes:
1021. Multiple frames of first images of the shooting scene are acquired through the first camera according to the same exposure parameter.
In this embodiment, "acquiring multiple frames of second images of the shooting scene" includes:
1041. Multiple frames of second images of the shooting scene are acquired through the second camera according to different exposure parameters.
In this embodiment, the electronic device obtains preview images of the shooting scene through two cameras, and stores the preview images acquired by the cameras in image cache queues. The electronic device may acquire two frames of preview images shot consecutively by the same camera from an image cache queue, and divide the shooting scene into a distant view and a close view according to the two frames of preview images. The electronic device may also acquire each frame of preview image shot by the two cameras from the image cache queues, and divide the shooting scene into a distant view and a close view according to the acquired preview images. It can be understood that the two cameras can perform long exposure, normal exposure, and short exposure when acquiring preview images. Each camera continuously acquires preview images of the shooting scene, and only a short time (e.g., 2 milliseconds) is needed to acquire one frame of preview image.
In this scheme, a first image cache queue and a second image cache queue are preset in the electronic device, and they are used to store preview images shot by different cameras: the first image cache queue stores first preview images shot by the first camera, and the second image cache queue stores second preview images shot by the second camera. It can be understood that an image cache queue may be a fixed-length queue or a variable-length queue; for example, the image cache queue may be a fixed-length queue that can cache 8 frames of preview images.
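A fixed-length cache queue of 8 preview frames, as in the example above, can be sketched with a bounded deque. The data structure is an implementation assumption (the patent does not prescribe one), and integer frame ids stand in for preview images.

```python
from collections import deque

# Sketch of a fixed-length image cache queue (8 frames, as in the example
# above): when full, the oldest preview frame is evicted, so the queue
# always holds the most recent exposures in order. Frame ids stand in for
# preview images here.

first_image_queue = deque(maxlen=8)

for frame_id in range(10):              # the camera keeps producing previews
    first_image_queue.append(frame_id)  # stored in exposure order

# The queue now holds the last 8 frames (2..9); the division step reads
# two consecutive recent previews from it:
two_consecutive = list(first_image_queue)[-2:]
```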
The electronic device includes the first camera and the second camera. The first camera and the second camera may both be front cameras of the electronic device, or may both be rear cameras. The first camera and the second camera can be arranged in various ways; for example, the first camera may be arranged directly above the second camera with a gap between them, or directly to the left of the second camera with a gap between them, and so on.
In some embodiments, the first camera may be a standard camera and the second camera may be a telephoto camera.
In some embodiments, "dividing the shooting scene into a distant view and a close view according to the two frames of first preview images" includes:
identifying the same objects in the two frames of first preview images, and acquiring the displacement of each object between the two frames of first preview images;
determining an object whose displacement is greater than or equal to a preset threshold as part of the close view;
and determining an object whose displacement is less than the preset threshold as part of the distant view.
In this scheme, when the electronic device divides the shooting scene into a distant view and a close view, it first obtains two temporally adjacent cached images of the shooting scene from the first image cache queue, denoted as a first cached image and a second cached image. The electronic device then identifies the same objects present in the first and second cached images; there may be multiple such objects. Next, the electronic device acquires the first position of each object in the first cached image and the second position of each object in the second cached image, and calculates the displacement of each object from the first and second positions. Finally, an object whose displacement is greater than or equal to a preset threshold is determined as part of the close view, and an object whose displacement is less than the preset threshold as part of the distant view. For example, taking one object (object A): the first position of object A in the first cached image and its second position in the second cached image are obtained, and the displacement of object A is calculated from the two positions; object A is determined as close view when its displacement is greater than or equal to the preset threshold, and as distant view when its displacement is less than the preset threshold.
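The displacement-based division can be sketched as follows. The per-object centroids and the 10-pixel threshold are assumptions for the example; the patent leaves the position representation and threshold open.

```python
import math

# Illustrative sketch of the displacement-based division: compare each
# object's centroid in two consecutive cached preview frames and classify
# against a preset threshold. The centroids and the 10-pixel threshold
# are assumptions for the example.

def classify_by_displacement(first_pos, second_pos, threshold_px=10.0):
    """first_pos/second_pos: dicts of object name -> (x, y) centroid in
    the first and second cached images. Returns name -> 'close'/'distant'."""
    result = {}
    for name in first_pos:
        x0, y0 = first_pos[name]
        x1, y1 = second_pos[name]
        shift = math.hypot(x1 - x0, y1 - y0)
        result[name] = "close" if shift >= threshold_px else "distant"
    return result

labels = classify_by_displacement(
    {"A": (100, 100), "B": (400, 120)},
    {"A": (118, 100), "B": (403, 121)})
# object A moved 18 px -> close view; object B moved ~3.2 px -> distant view
```

Per the feature-point variants below in the text, the centroid could be replaced by the average displacement of several feature points or by a central feature point.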
The preset threshold is preset in the electronic device; the preset threshold may be fixed or may vary according to a certain rule. In some embodiments, acquiring the displacement of each object between the two frames of first preview images further includes: determining the preset threshold corresponding to each object according to the attribute of the object. For example, when the object is the sky, the corresponding preset threshold is smaller; when the object is a person, the corresponding preset threshold is larger; and so on.
It should be noted that, for an object far away in the shooting scene, the displacement of the object in the image is much smaller than its displacement in reality, while for a nearby object, the displacement in the image is close to its displacement in reality. For example, when a user holds the electronic device to photograph a scene (distant view: object B (still); close view: object C (still)), slight hand shake is inevitable; relative to the camera, object B and object C move by the same amount, but the displacement of object B in the captured image is smaller than that of object C.
In some embodiments, acquiring the displacement of each object between the two frames of first preview images includes: obtaining the displacements of a plurality of feature points in each object, and taking the average of these displacements as the displacement of the object.
In some embodiments, acquiring the displacement of each object between the two frames of first preview images includes: obtaining the displacement of the central feature point of each object, and taking that displacement as the displacement of the object.
In some embodiments, "dividing the shooting scene into a distant view and a close view according to the two frames of first preview images" includes:
identifying the position areas of the same object in the two frames of first preview images to obtain a first position area and a second position area;
merging the first position area and the second position area to obtain a merged position area of the object;
and judging whether the area ratio of the merged position area to the first position area or the second position area reaches a preset ratio; if so, the object is determined as part of the close view, and if not, as part of the distant view.
In this scheme, when the electronic device divides the shooting scene into a distant view and a close view, it first obtains two temporally adjacent cached images of the shooting scene from the first image cache queue, denoted as a third cached image and a fourth cached image. The electronic device then performs semantic segmentation on the third and fourth cached images using a semantic segmentation technique, thereby determining the objects present in the two images and their corresponding position areas. Next, according to the semantic segmentation results, the electronic device identifies the position areas of the same object in the third and fourth cached images; for example, referring to Fig. 3, the position area of object D in the fourth cached image is shifted to the right by a certain distance compared with its position area in the third cached image. The electronic device then merges the position areas of the same object in the third and fourth cached images to obtain a merged position area, as shown in Fig. 4. Finally, the electronic device judges whether the area ratio of the merged position area to the first position area or the second position area reaches the preset ratio; if so, the object is determined as part of the close view, and if not, as part of the distant view.
It should be noted that, in this embodiment, area ratio = area of the merged position region / area of the first position region, or area ratio = area of the merged position region / area of the second position region; as long as one of the calculated area ratios reaches the preset ratio, the object is determined to be a close view. Furthermore, in some embodiments, the area ratio may also be expressed as: area ratio = area of the first position region / area of the merged position region, or area ratio = area of the second position region / area of the merged position region.
In some embodiments, the area ratio may also be expressed as: area ratio = area of the merged position region / (area of the first position region + area of the second position region), or area ratio = 2 × area of the merged position region / (area of the first position region + area of the second position region).
In some embodiments, merging the first location area and the second location area to obtain a merged location area of the same object comprises: and determining first coordinate information of the first position area and second coordinate information of the second position area, and generating a combined position area in the same coordinate system according to the first coordinate information and the second coordinate information.
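The merge-and-ratio test described above can be sketched in a few lines. This is a minimal sketch assuming axis-aligned bounding boxes given as (x, y, w, h) tuples in a shared coordinate system; the function names and the example threshold of 1.5 are illustrative assumptions, not taken from the patent.

```python
def merge_boxes(box1, box2):
    """Union bounding box of two (x, y, w, h) position areas in one coordinate system."""
    x = min(box1[0], box2[0])
    y = min(box1[1], box2[1])
    x2 = max(box1[0] + box1[2], box2[0] + box2[2])
    y2 = max(box1[1] + box1[3], box2[1] + box2[3])
    return (x, y, x2 - x, y2 - y)

def classify_by_area_ratio(box1, box2, preset_ratio=1.5):
    """Close view if merged area / single-frame area reaches the preset ratio."""
    merged = merge_boxes(box1, box2)
    merged_area = merged[2] * merged[3]
    area1 = box1[2] * box1[3]
    area2 = box2[2] * box2[3]
    # An object that moved far between frames yields a merged region much
    # larger than either single-frame region -> close view.
    if merged_area / area1 >= preset_ratio or merged_area / area2 >= preset_ratio:
        return "close view"
    return "distant view"
```

For instance, an object whose box shifts from (10, 10, 20, 20) to (30, 10, 20, 20) produces a merged box of (10, 10, 40, 20), giving a ratio of 2.0 and a close-view classification, while a static box gives a ratio of 1.0 and a distant-view classification.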
In this scheme, when a shooting instruction or a video recording instruction is detected, the electronic device acquires multiple frames of first images of the shooting scene through the first camera using the same exposure parameter, and acquires multiple frames of second images of the shooting scene through the second camera using different exposure parameters. It then obtains a first composite image with a high dynamic range in the close view by the multi-frame short-exposure high-dynamic-image synthesis method, obtains a second composite image with a high dynamic range in the distant view by the exposure-bracketing high-dynamic-image synthesis method, and finally synthesizes the first composite image and the second composite image to obtain the target image.
In some embodiments, 1021 may comprise: and exposing the shooting scene through the first camera according to the preset short exposure time length to obtain a multi-frame first image.
In some embodiments, 1022 may include: and respectively exposing the shooting scene through the second camera according to the preset over-exposure value and the preset under-exposure value to obtain a plurality of frames of second images.
In some embodiments, 1022 may include: and respectively exposing the shooting scene through the second camera according to the preset over-exposure value, the preset normal exposure value and the preset under-exposure value to obtain a multi-frame second image.
In some embodiments, the electronic device further includes a third camera, 1022 may perform the following: and acquiring multiple frames of second images of the shooting scene through the second camera and the third camera according to different exposure parameters. For example, the second camera exposes the shooting scene with an exposure value of-3 EV, and the third camera exposes the shooting scene with an exposure value of 3 EV.
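The two capture strategies (the same short exposure repeated on the first camera, differing exposures on the second) can be summarized in a small sketch. The frame count and EV values below are illustrative assumptions, standing in for the preset exposure parameters described above.

```python
def build_exposure_plan(short_frames=4, short_ev=-1.0, bracket_evs=(-3.0, 0.0, 3.0)):
    """Per-camera exposure plans:
    first camera  -> several frames at one short exposure (multi-frame short exposure);
    second camera -> one frame per EV step (exposure bracketing)."""
    first_camera = [short_ev] * short_frames   # identical exposure parameters
    second_camera = list(bracket_evs)          # differing exposure parameters
    return first_camera, second_camera
```

With a third camera available, the bracket could instead be split across cameras, e.g. -3EV on the second camera and +3EV on the third, as in the example above.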
Referring to fig. 5, fig. 5 is a third flow chart of the image processing method according to the embodiment of the present disclosure.
In some embodiments, "dividing the shooting scene into a distant view and a close view" includes:
1013. and acquiring a frame of first preview image from the first image cache queue, and acquiring a frame of second preview image from the second image cache queue.
1014. and dividing the shooting scene into a long shot and a short shot according to the first preview image and the second preview image.
In this embodiment, "acquiring a plurality of frames of the first image of the shooting scene" includes:
1022. And acquiring a plurality of frames of first preview images from the first image cache queue, and taking the plurality of frames of first preview images as a plurality of frames of first images.
1042. And acquiring a frame of first preview image from the first image cache queue, acquiring a frame of second preview image from the second image cache queue, and taking the frame of first preview image and the frame of second preview image as a plurality of frames of second images.
In this scheme, a first preview image is obtained through the first camera and stored in the first image cache queue, and a second preview image is obtained through the second camera and stored in the second image cache queue. The electronic device divides the shooting scene into a distant view and a close view according to two consecutive frames of the first preview image. When a shooting instruction or a video recording instruction is detected, multiple frames of the first preview image are obtained directly from the first image cache queue and used as the multiple frames of first images; one frame of the first preview image is obtained directly from the first image cache queue and one frame of the second preview image directly from the second image cache queue, and these two frames are used as the multiple frames of second images. This shortens the operation time of shooting or video recording and improves image processing efficiency.
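The buffer reuse described above can be sketched with two bounded queues. Here `collections.deque` stands in for the image cache queues and strings stand in for frames; the class and method names are illustrative assumptions.

```python
from collections import deque

class PreviewCache:
    """Two image cache queues filled in exposure order; on capture, cached previews are reused."""
    def __init__(self, maxlen=8):
        self.first_queue = deque(maxlen=maxlen)   # first-camera previews
        self.second_queue = deque(maxlen=maxlen)  # second-camera previews

    def on_preview(self, first_frame, second_frame):
        # Frames are stored in the order the cameras expose them.
        self.first_queue.append(first_frame)
        self.second_queue.append(second_frame)

    def on_capture(self, n_first=4):
        # Multiple frames of first images: the newest n_first first-camera previews.
        first_images = list(self.first_queue)[-n_first:]
        # Multiple frames of second images: one frame from each queue.
        second_images = [self.first_queue[-1], self.second_queue[-1]]
        return first_images, second_images
```

Because the frames already sit in the queues when the shooting instruction arrives, no new exposure is needed, which is exactly the time saving the scheme claims.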
in some embodiments, 1022 may include: and acquiring a plurality of frames of second preview images from the second image cache queue, and taking the plurality of frames of second preview images as a plurality of frames of first images.
In some embodiments, 1042 may include: acquiring multiple frames of first preview images from the first image cache queue and multiple frames of second preview images from the second image cache queue, and using them together as the multiple frames of second images. The number of frames of first preview images among the second images may be the same as or different from the number of frames of second preview images. For example, if the multiple frames of second images include 3 frames of first preview images and 3 frames of second preview images, then when synthesizing them, the electronic device first performs multi-frame noise reduction on the 3 frames of first preview images to obtain a noise-reduced image A, performs multi-frame noise reduction on the 3 frames of second preview images to obtain a noise-reduced image B, and then performs high-dynamic-image synthesis on the noise-reduced images A and B to obtain a target image with a high dynamic range.
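The noise-reduce-then-fuse step in this example can be sketched on flat lists of pixel values. Real implementations operate on full image arrays with a proper HDR fusion; the pixel values and the 50/50 blend here are placeholder assumptions for illustration.

```python
def multi_frame_denoise(frames):
    """Average corresponding pixels across frames (simple multi-frame noise reduction)."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

def hdr_merge(image_a, image_b, weight_a=0.5):
    """Toy high-dynamic merge: weighted blend of the two denoised images."""
    return [weight_a * a + (1 - weight_a) * b for a, b in zip(image_a, image_b)]

# 3 first previews -> denoised image A; 3 second previews -> denoised image B.
image_a = multi_frame_denoise([[0.8, 0.9], [0.9, 1.0], [1.3, 1.1]])
image_b = multi_frame_denoise([[0.5, 0.6], [0.7, 0.8], [0.6, 0.7]])
target = hdr_merge(image_a, image_b)
```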
in some embodiments, after execution 1014, the electronic device may execute 1021, 103, 1041, 105, and 106.
In some embodiments, the electronic device may execute 1012 before executing 1022, 103, 1042, 105, and 106.
in some embodiments, "dividing a shooting scene into a distant view and a near view according to one frame of the first preview image and one frame of the second preview image" includes:
Acquiring depth information of an object in a shooting scene according to one frame of first preview image and one frame of second preview image, wherein the depth information comprises a shooting distance;
Determining an object whose shooting distance is smaller than a preset distance as a close shot;
and determining an object whose shooting distance is greater than or equal to the preset distance as a long shot.
In this scheme, when the electronic device divides the shooting scene into a distant view and a close view, it first obtains one cached frame of the shooting scene from the first image cache queue, recorded as a fifth cached image, and one cached frame from the second image cache queue, recorded as a sixth cached image. The electronic device then obtains the depth information of the same object in the fifth and sixth cached images through a stereo matching algorithm; the depth information includes the shooting distance, i.e. the distance from the object to the electronic device. Objects whose shooting distance is smaller than a preset distance are determined to be the close view, and objects whose shooting distance is greater than or equal to the preset distance are determined to be the distant view. The preset distance is preconfigured in the electronic device; it may be fixed or may change according to a certain rule, and the user may also change it.
In some embodiments, the electronic device may emit continuous near-infrared pulses toward the shooting scene, receive the light pulses reflected back by objects in the scene through a sensor, and obtain the distance between each object and the electronic device from the emitted and received pulses. Objects whose distance is smaller than the preset distance are determined to be the close view, and objects whose distance is greater than or equal to the preset distance are determined to be the distant view.
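A minimal sketch of the depth-threshold split, assuming per-object distances (in meters) have already been recovered by stereo matching or time-of-flight; the object names and the 3-meter threshold are hypothetical.

```python
def split_by_depth(object_depths, preset_distance=3.0):
    """Objects nearer than the preset distance form the close view; the rest form the distant view."""
    close_view = [name for name, d in object_depths.items() if d < preset_distance]
    distant_view = [name for name, d in object_depths.items() if d >= preset_distance]
    return close_view, distant_view
```

For the running example of a person standing in front of a mountain, the person falls under the threshold and the mountain above it.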
As can be seen from the above, in the image processing method provided in this embodiment of the present application, when a shooting instruction or a video recording instruction is detected, the electronic device directly obtains multiple frames of first images and multiple frames of second images from the image cache queue, where the multiple frames of first images have the same exposure parameters, and the multiple frames of second images have different exposure parameters, then synthesizes the multiple frames of first images to obtain a first synthesized image with a high dynamic range in a close view, synthesizes the multiple frames of second images to obtain a second synthesized image with a high dynamic range in a far view, and finally synthesizes the first synthesized image and the second synthesized image to obtain a target image with a high dynamic range. According to the scheme, when the shooting instruction or the video recording instruction is detected, the multi-frame first image and the multi-frame second image are directly acquired from the image cache queue, so that the operation time of shooting or video recording can be shortened, and the image processing efficiency is improved.
Referring to fig. 6, fig. 6 is a fourth flowchart illustrating an image processing method according to an embodiment of the present disclosure.
in some embodiments, the "synthesizing the plurality of frames of the first image to obtain the first synthesized image with the high dynamic range in the close view" includes:
1031. And performing mask processing on the long shot in the first images of the multiple frames.
1032. And performing high-dynamic image synthesis processing on the multi-frame first image subjected to mask processing to obtain a first synthesized image with a high dynamic range in a close view.
According to the scheme, after multiple frames of first images with the same exposure parameters are obtained, masking processing is respectively carried out on the long shot of each frame of first image in the multiple frames of first images, high-dynamic image synthesis processing is carried out on the multiple frames of first images after masking processing, namely high-dynamic image synthesis processing is carried out on the short shot part in the multiple frames of first images, and a first synthesis image with a high dynamic range in the short shot is obtained.
For example, assume the electronic device acquires 4 frames of first images of a shooting scene (distant view: mountain; close view: person), namely a first scene image A (-1EV exposure value), a first scene image B (-1EV exposure value), a first scene image C (-1EV exposure value), and a first scene image D (-1EV exposure value). Referring to fig. 7, the electronic device first masks the mountain in each of the four images, then selects one of the masked images as a reference image. Assuming the masked first scene image A is selected as the reference, the electronic device aligns the close views of the masked images B, C, and D with the close view of image A, and then calculates the average pixel value of each pixel in the close-view portion across the aligned images (for example, if the pixel values of a pixel at a certain position in the four first scene images are 0.8, 0.9, 1.1, and 1.2, the average pixel value of that pixel is 1.0). A frame of first composite image is then obtained from the average pixel values of the close view. The electronic device may adjust the pixel value of each close-view pixel in the reference image (i.e., first scene image A) to the calculated average while leaving the distant-view pixels unchanged, thereby obtaining a frame of first composite image; alternatively, it may generate a new image from the calculated close-view averages and the distant-view pixel values of the reference image, and use the generated image as the first composite image.
The electronic device then further increases the brightness of this image, obtaining the first composite image with a high dynamic range in the close view.
It should be noted that, in the embodiment of the present application, the first composite image includes both a near view portion with a high dynamic range and a distant view portion without a high dynamic range. The masking processing is mainly used for shielding the long-range view part in the multi-frame first image, so that the long-range view part in the multi-frame first image does not participate in the high-dynamic image synthesis processing.
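The mask-align-average procedure of the example can be sketched over flat pixel lists: a boolean mask marks the close-view pixels, and only those are averaged across the (assumed pre-aligned) frames, mirroring the worked pixel values above. The function name and mask representation are illustrative assumptions.

```python
def masked_average(frames, close_mask):
    """Average close-view pixels across frames; keep the reference frame's distant view."""
    reference = frames[0]           # the first frame acts as the reference image
    result = list(reference)
    n = len(frames)
    for i, is_close in enumerate(close_mask):
        if is_close:                # masked-out distant pixels do not join the synthesis
            result[i] = sum(f[i] for f in frames) / n
    return result
```

With four frames whose close-view pixel reads 0.8, 0.9, 1.1, and 1.2, the merged value is 1.0 as in the example, while the masked distant-view pixel keeps its reference value.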
in some embodiments, after the multiple frames of first images are acquired, a close shot in the multiple frames of first images may be masked, and the masked multiple frames of first images may be subjected to high-dynamic image synthesis processing to obtain a first synthesized image with a high dynamic range in the close shot.
It should be noted that the first composite image in this scheme also includes both a near view portion with a high dynamic range and a far view portion without a high dynamic range. In addition, the masking processing in the scheme is mainly used for taking a close-range part in the first images of the multiple frames as the interested area, so that the high-dynamic image synthesis processing is only carried out on the interested area in the first images of the multiple frames.
In some embodiments, after acquiring the plurality of frames of first images, the high dynamic range synthesis processing may be directly performed on the plurality of frames of first images, so as to obtain a first synthesized image with a high dynamic range in a close view.
in some embodiments, the "synthesizing the plurality of frames of the second image to obtain the second synthesized image with the high dynamic range in the long-range view" includes:
1051. and performing mask processing on the close shot in the second images of the plurality of frames.
1052. And performing high-dynamic image synthesis processing on the multi-frame second image subjected to mask processing to obtain a second synthetic image with a high dynamic range in a long-range view.
in the scheme, after multiple frames of second images with different exposure parameters are obtained, masking is respectively carried out on a close shot of each frame of second image in the multiple frames of second images, and high-dynamic image synthesis processing is carried out on the multiple frames of second images after masking, namely, high-dynamic image synthesis processing is carried out on a long shot part in the multiple frames of second images, so that a second synthetic image with a long-shot high-dynamic range is obtained.
For example, assume the electronic device acquires 3 frames of second images of a shooting scene (distant view: mountain; close view: person), namely a second scene image E (-3EV exposure value), a second scene image F (0EV exposure value), and a second scene image G (3EV exposure value). The electronic device first masks the person in each of the three images, then performs high-dynamic-image synthesis on the mountains of the masked images E, F, and G to obtain a second composite image with a high dynamic range in the distant view. The distant view of the second composite image retains the detail of the brighter regions from image E, the normally exposed detail from image F, and the detail of the darker regions from image G.
It should be noted that, in the embodiment of the present application, the second composite image includes both the distant view portion with the high dynamic range and the near view portion without the high dynamic range. The mask processing is mainly used for shielding the close shot part in the second images of the plurality of frames, so that the close shot part in the second images of the plurality of frames does not participate in the high-dynamic image synthesis processing.
In some embodiments, after the multiple frames of second images are acquired, the multiple frames of second images may be directly subjected to high dynamic range synthesis processing, so as to obtain a second synthesized image with a high dynamic range in a long-range view.
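A crude sketch of fusing the bracketed distant-view pixels: for each unmasked pixel, keep the bracketed sample closest to mid-gray, a simple stand-in for a real exposure-fusion weighting; close-view pixels are masked out and copied from the normally exposed frame. This is an illustrative assumption, not the patent's exact synthesis.

```python
def fuse_bracketed(frames, near_mask, normal_index=1, mid_gray=0.5):
    """frames: bracketed exposures ordered under/normal/over, pixel values in [0, 1]."""
    result = list(frames[normal_index])    # close view copied from the normal exposure
    for i, is_near in enumerate(near_mask):
        if not is_near:
            # Distant view: pick the best-exposed bracketed sample per pixel.
            result[i] = min((f[i] for f in frames), key=lambda v: abs(v - mid_gray))
    return result
```

In this toy form, a distant pixel blown out to 0.95 and 1.0 in the normal and over exposures is replaced by the 0.1 reading of the under exposure, which is what lets the under-exposed frame contribute the bright-region detail.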
in some embodiments, "synthesizing the first synthesized image and the second synthesized image to obtain the target image with a high dynamic range" includes:
1061. And carrying out alignment processing on the first synthetic image and the second synthetic image.
1062. and acquiring a first distant view weight value and a first near view weight value of the first synthetic image, and acquiring a second distant view weight value and a second near view weight value of the second synthetic image, wherein the first near view weight value is greater than the second near view weight value, and the first distant view weight value is less than the second distant view weight value.
1063. and synthesizing the aligned first synthetic image and the aligned second synthetic image according to the first distant view weight value, the first close view weight value, the second distant view weight value and the second close view weight value to obtain a target image with a high dynamic range.
In this scheme, after obtaining the first composite image (high dynamic range in the close view) and the second composite image (high dynamic range in the distant view), the electronic device aligns the two images, obtains the distant-view and close-view weight values of each, and calculates the average pixel value of each pixel from the aligned images and the obtained weight values. For example, if a close-view pixel at a certain position has values 2.1 and 0.9 in the first and second composite images respectively, its average pixel value is 2.1 × 0.9 + 0.9 × 0.1 = 1.98; if a distant-view pixel at a certain position has values 1.1 and 2.5 respectively, its average pixel value is 1.1 × 0.2 + 2.5 × 0.8 = 2.22. The target image with a high dynamic range is then obtained from the average pixel value at each position. The first distant-view weight value, first close-view weight value, second distant-view weight value, and second close-view weight value are preset in the electronic device and may be user-defined or assigned automatically by the electronic device.
It should be noted that the average pixel value of a distant-view pixel depends on the first and second distant-view weight values and may be expressed as:

P = k1 × p1 + k2 × p2

where P is the average pixel value of the distant-view pixel in the target image, p1 is the pixel value of that pixel in the first composite image, p2 is its pixel value in the second composite image, k1 is the first distant-view weight value of the first composite image, and k2 is the second distant-view weight value of the second composite image.

The average pixel value of a close-view pixel depends on the first and second close-view weight values and may be expressed as:

P = k3 × p3 + k4 × p4

where P is the average pixel value of the close-view pixel in the target image, p3 is the pixel value of that pixel in the first composite image, p4 is its pixel value in the second composite image, k3 is the first close-view weight value of the first composite image, and k4 is the second close-view weight value of the second composite image.
In addition, in the embodiment of the present application, the distant view weight value of the first composite image is much smaller than the distant view weight value of the second composite image, and the close view weight value of the first composite image is much larger than the close view weight value of the second composite image. Because the distant view weight value of the first synthetic image is far smaller than that of the second synthetic image, the distant view effect of the target image is closer to the second synthetic image, and because the near view weight value of the first synthetic image is far larger than that of the second synthetic image, the near view effect of the target image is closer to the first synthetic image. In summary, for the target image, the close view exhibits the characteristic of multi-frame short exposure, and the far view exhibits the characteristic of bracketing exposure.
in some embodiments, the sum of the distance view weight value of the first composite image and the distance view weight value of the second composite image is equal to one, and the sum of the near view weight value of the first composite image and the near view weight value of the second composite image is equal to one.
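The per-region weighted blend can be written directly as a sketch; the default weights mirror the worked example (close view 0.9/0.1, distant view 0.2/0.8) and each pair sums to one. The function and parameter names are illustrative.

```python
def blend_target(first, second, near_mask, k_near=(0.9, 0.1), k_far=(0.2, 0.8)):
    """Weighted per-pixel blend of the two composites with region-specific weights:
    close view   P = k3*p3 + k4*p4  (first composite dominates);
    distant view P = k1*p1 + k2*p2  (second composite dominates)."""
    k3, k4 = k_near
    k1, k2 = k_far
    return [
        k3 * p1 + k4 * p2 if is_near else k1 * p1 + k2 * p2
        for p1, p2, is_near in zip(first, second, near_mask)
    ]
```

Applied to the worked example, the close-view pixel blends to 1.98 and the distant-view pixel to 2.22, so the close view follows the first composite and the distant view follows the second.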
In some embodiments, "synthesizing the first synthesized image and the second synthesized image to obtain the target image with a high dynamic range" includes:
Acquiring a close-view region of the first composite image and a distant-view region of the second composite image;
and synthesizing the close-view region and the distant-view region to obtain a target image with a high dynamic range.
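The region-stitching alternative reduces to per-pixel selection, equivalent to the weighted blend with weights of one and zero. A minimal sketch with illustrative names:

```python
def stitch_regions(first, second, near_mask):
    """Close-view pixels come from the first composite, distant-view pixels from the second."""
    return [p1 if is_near else p2 for p1, p2, is_near in zip(first, second, near_mask)]
```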
Referring to fig. 8, fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. Specifically, the image processing apparatus 200 includes: a dividing module 201, a first obtaining module 202, a first synthesizing module 203, a second obtaining module 204, a second synthesizing module 205 and a third synthesizing module 206.
A dividing module 201, configured to divide a shooting scene into a long shot and a short shot;
A first obtaining module 202, configured to obtain multiple frames of first images of the shooting scene, where the multiple frames of first images have the same exposure parameter;
The first synthesizing module 203 is configured to perform synthesizing processing on the multiple frames of first images to obtain a first synthesized image of the close-range scene with a high dynamic range;
A second obtaining module 204, configured to obtain multiple frames of second images of the shooting scene, where the multiple frames of second images have different exposure parameters;
A second synthesis module 205, configured to perform synthesis processing on the multiple frames of second images to obtain a second synthesized image with a high dynamic range in the long-range view;
and a third synthesis module 206, configured to perform synthesis processing on the first synthesized image and the second synthesized image to obtain a target image with a high dynamic range.
In some embodiments, the image processing apparatus 200 further comprises a first storage module and a second storage module:
The first storage module is used for acquiring a first preview image of the shooting scene through a first camera and storing the first preview image to a first image cache queue according to the sequence of exposure of the first camera;
The second storage module is used for acquiring a second preview image of the shooting scene through a second camera and storing the second preview image to a second image cache queue according to the sequence of exposure of the second camera;
The dividing module 201 is further configured to obtain two consecutive frames of first preview images from the first image buffer queue; dividing a shooting scene into a long shot and a short shot according to the two frames of first preview images;
or the dividing module 201 is further configured to obtain a frame of first preview image of the shooting scene from the first image cache queue, and obtain a frame of second preview image of the shooting scene from the second image cache queue; dividing a shooting scene into a long shot and a short shot according to the first preview image and the second preview image;
the first obtaining module 202 is further configured to obtain multiple frames of first preview images from the first image cache queue, and use the multiple frames of first preview images as multiple frames of first images;
The second obtaining module 204 is further configured to obtain a frame of the first preview image from the first image cache queue, obtain a frame of the second preview image from the second image cache queue, and use the frame of the first preview image and the frame of the second preview image as multiple frames of the second image.
in some embodiments, the dividing module 201 is specifically configured to obtain two consecutive frames of first preview images from the first image buffer queue; identifying the same object in the two frames of first preview images, and acquiring the displacement of each object in the two frames of first preview images; determining the object with the displacement greater than or equal to a preset threshold value as a close shot; and determining the object with the displacement smaller than a preset threshold value as a long shot.
In some embodiments, the dividing module 201 is specifically configured to obtain two consecutive frames of first preview images from the first image buffer queue; identifying the position areas of the same object in the two frames of first preview images to obtain a first position area and a second position area; combining the first position area and the second position area to obtain a combined position area of the same object; and judging whether the area ratio of the merging position area to the first position area or the second position area reaches a preset ratio, if so, determining the same object as a close view, and if not, determining the same object as a long view.
In some embodiments, the dividing module 201 is specifically configured to obtain a frame of first preview image of the shooting scene from the first image cache queue, and obtain a frame of second preview image of the shooting scene from the second image cache queue; acquire depth information of objects in the shooting scene according to the first preview image and the second preview image, the depth information including a shooting distance; determine objects whose shooting distance is smaller than a preset distance as the close shot; and determine objects whose shooting distance is greater than or equal to the preset distance as the long shot.
In some embodiments, the first obtaining module 202 is further configured to obtain, through the first camera, multiple frames of first images of the shooting scene according to the same exposure parameter; the second obtaining module 204 is further configured to obtain, through the second camera, multiple frames of second images of the shooting scene according to different exposure parameters.
In some embodiments, the first storage module is further configured to obtain a first preview image of the shooting scene through a first camera according to the same exposure parameter, and store the first preview image to a first image cache queue according to the sequence of exposure of the first camera;
The second storage module is further configured to obtain a second preview image of the shooting scene through a second camera according to different exposure parameters, and store the second preview image to a second image cache queue according to the sequence of exposure of the second camera;
the first obtaining module 202 is further configured to obtain multiple frames of first preview images of the shooting scene from the first image cache queue, and use the multiple frames of first preview images as multiple frames of first images;
The second obtaining module 204 is further configured to obtain multiple frames of second preview images of the shooting scene from the second image cache queue, and use the multiple frames of second preview images as multiple frames of second images.
In some embodiments, the first synthesizing module 203 is further configured to mask the perspective in the plurality of frames of the first image;
And performing high-dynamic image synthesis processing on the multi-frame first image subjected to mask processing to obtain a first synthesized image with a high dynamic range in the close range.
in some embodiments, the second synthesizing module 205 is further configured to mask the close shot in the second images of the plurality of frames;
And performing high-dynamic image synthesis processing on the plurality of frames of second images after mask processing to obtain a second synthetic image with the high dynamic range of the long-range view.
In some embodiments, the third synthesis module 206 is further configured to perform alignment processing on the first synthesized image and the second synthesized image;
acquire a first distant view weight value and a first near view weight value of the first synthesized image, and acquire a second distant view weight value and a second near view weight value of the second synthesized image, wherein the first near view weight value is greater than the second near view weight value, and the first distant view weight value is less than the second distant view weight value;
and synthesize the aligned first synthesized image and the aligned second synthesized image according to the first distant view weight value, the first near view weight value, the second distant view weight value, and the second near view weight value to obtain a target image with a high dynamic range.
As can be seen from the above, in the image processing apparatus 200 provided in this embodiment of the application, the dividing module 201 divides the shooting scene into a long shot and a close shot. The first obtaining module 202 then obtains multiple frames of first images of the shooting scene, where the multiple frames of first images have the same exposure parameter, and the first synthesizing module 203 synthesizes the multiple frames of first images to obtain a first synthesized image with a high dynamic range in the close shot. The second obtaining module 204 obtains multiple frames of second images of the shooting scene, where the multiple frames of second images have different exposure parameters, and the second synthesizing module 205 synthesizes the multiple frames of second images to obtain a second synthesized image with a high dynamic range in the long shot. Finally, the third synthesis module 206 synthesizes the first synthesized image and the second synthesized image to obtain a target image with a high dynamic range. In the image processing scheme provided by this embodiment, the close shot uses a multi-frame short-exposure high-dynamic image synthesis method, and the long shot uses a surrounding-exposure (exposure bracketing) high-dynamic image synthesis method, so the image quality can be improved to a greater extent.
Fig. 9 shows a first structural diagram of an electronic device according to an embodiment of the present application. The electronic device 300 includes a processor 301, a memory 302, and a camera assembly 303. The processor 301 is electrically connected to the memory 302 and the camera assembly 303. The camera assembly 303 includes a first camera and a second camera.
In some embodiments, the first camera is a standard camera and the second camera is a tele camera.
The processor 301 is the control center of the electronic device 300. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device 300 by running or loading a computer program stored in the memory 302 and by calling and processing data stored in the memory 302, thereby monitoring the electronic device 300 as a whole.
The memory 302 may be used to store software programs and modules, and the processor 301 executes various functional applications and data processing by running the computer programs and modules stored in the memory 302. The memory 302 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, a computer program required for at least one function, and the like, and the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 302 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 302 may also include a memory controller to provide the processor 301 with access to the memory 302.
The camera assembly 303 may include image processing circuitry, which may be implemented using hardware and/or software components and may include various processing units that define an image signal processing (ISP) pipeline. The image processing circuit may include at least a plurality of cameras, an image signal processor (ISP processor), control logic, an image memory, and a display. Each camera may include one or more lenses and an image sensor. The image sensor may include an array of color filters (e.g., Bayer filters), may acquire the light intensity and wavelength information captured by each imaging pixel, and may provide a set of raw image data that can be processed by the image signal processor.
The image signal processor may process the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the image signal processor may perform one or more image processing operations on the raw image data and gather statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision. After being processed by the image signal processor, the raw image data can be stored in the image memory, and the image signal processor may also receive image data from the image memory.
The image memory may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When image data is received from the image memory, the image signal processor may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to an image memory for additional processing before being displayed. The image signal processor may also receive processed data from the image memory and perform image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the image signal processor may also be sent to an image memory, and the display may read image data from the image memory. In one embodiment, the image memory may be configured to implement one or more frame buffers.
The statistical data determined by the image signal processor may be sent to the control logic. For example, the statistical data may include statistical information of the image sensor such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens shading correction, and the like.
The control logic may include a processor and/or microcontroller that executes one or more routines (e.g., firmware). One or more routines may determine camera control parameters and ISP control parameters based on the received statistics. For example, the control parameters of the camera may include camera flash control parameters, control parameters of the lens (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and the like.
Referring to fig. 10, fig. 10 is a schematic structural diagram of the image processing circuit in this embodiment. As shown in fig. 10, for convenience of explanation, only the aspects of the image processing technique related to the embodiments of the present application are shown.
For example, the image processing circuitry may include: camera, image signal processor, control logic ware, image memory, display. The camera may include one or more lenses and an image sensor, among others. In some embodiments, the camera may be either a tele camera or a wide camera.
The first image collected by the camera is transmitted to the image signal processor for processing. After the image signal processor processes the first image, statistical data of the first image (e.g., the brightness, contrast value, and color of the image) may be sent to the control logic. The control logic may determine the control parameters of the camera according to the statistical data, so that the camera can perform operations such as automatic focusing and automatic exposure according to the control parameters. The first image can be stored in the image memory after being processed by the image signal processor, and the image signal processor may also read the image stored in the image memory for processing. In addition, the first image can be sent directly to the display for display after being processed by the image signal processor, and the display may also read the image in the image memory for display.
In addition, although not shown in the figure, the electronic device may further include a CPU and a power supply module. The CPU is connected to the control logic, the image signal processor, the image memory, and the display, and is used for global control. The power supply module is used to supply power to each module.
In this embodiment, the processor 301 in the electronic device 300 loads the instructions corresponding to the processes of one or more computer programs into the memory 302, and the processor 301 runs the computer program stored in the memory 302 to implement various functions, as follows:
dividing a shooting scene into a long shot and a close shot;
acquiring multiple frames of first images of the shooting scene, wherein the multiple frames of first images have the same exposure parameter;
synthesizing the multiple frames of first images to obtain a first synthesized image with a high dynamic range in the close shot;
acquiring multiple frames of second images of the shooting scene, wherein the multiple frames of second images have different exposure parameters;
synthesizing the multiple frames of second images to obtain a second synthesized image with a high dynamic range in the long shot;
and synthesizing the first synthesized image and the second synthesized image to obtain a target image with a high dynamic range.
In some embodiments, before dividing the shooting scene into a long shot and a short shot, the processor 301 may perform:
acquiring a first preview image of the shooting scene through a first camera, and storing the first preview image to a first image cache queue according to the sequence of exposure of the first camera;
acquiring a second preview image of the shooting scene through a second camera, and storing the second preview image to a second image cache queue according to the sequence of exposure of the second camera;
when dividing the shooting scene into a long shot and a short shot, the processor 301 may perform:
acquiring two consecutive frames of first preview images from the first image cache queue;
dividing a shooting scene into a long shot and a short shot according to the two frames of first preview images; or
acquiring a frame of first preview image from the first image cache queue, and acquiring a frame of second preview image from the second image cache queue;
dividing a shooting scene into a long shot and a short shot according to the first preview image and the second preview image;
When acquiring multiple frames of first images of the shooting scene, the processor 301 may perform:
acquiring a plurality of frames of first preview images from the first image cache queue, and taking the plurality of frames of first preview images as a plurality of frames of first images;
When acquiring multiple frames of second images of the shooting scene, the processor 301 may perform:
and acquiring a frame of first preview image from the first image cache queue, acquiring a frame of second preview image from the second image cache queue, and taking the frame of first preview image and the frame of second preview image as a plurality of frames of second images.
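The per-camera image cache queues described above can be sketched as a fixed-length FIFO that holds preview frames in exposure order — a minimal Python illustration; the `PreviewCache` name and the capacity of 3 frames are assumptions, not specified by the embodiment:

```python
from collections import deque

class PreviewCache:
    """Fixed-length FIFO of preview frames, stored in exposure order."""

    def __init__(self, capacity=3):
        # deque(maxlen=...) silently drops the oldest frame when full,
        # so the queue always holds the most recently exposed frames.
        self.frames = deque(maxlen=capacity)

    def push(self, frame):
        """Store a newly exposed preview frame."""
        self.frames.append(frame)

    def latest(self, n=1):
        """Return the n most recently exposed frames, oldest first."""
        return list(self.frames)[-n:]

# Simulate the first camera exposing five preview frames in order.
q = PreviewCache(capacity=3)
for i in range(5):
    q.push(f"frame{i}")

two_consecutive = q.latest(2)  # two consecutive frames, e.g. for scene division
```

Pulling `latest(2)` corresponds to taking two consecutive first preview images for the scene division step, while `latest(n)` with a larger `n` corresponds to taking multiple frames for synthesis.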
In some embodiments, when dividing the shooting scene into a long shot and a short shot according to the two frames of the first preview image, the processor 301 may perform:
Identifying the same object in the two frames of first preview images, and acquiring the displacement of each object in the two frames of first preview images;
determining the object with the displacement greater than or equal to a preset threshold value as a close shot;
and determining the object with the displacement smaller than a preset threshold value as a long shot.
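The displacement-based division above can be sketched as follows — a minimal illustration assuming objects are already matched across two consecutive preview frames and represented by their (x, y) centers; the tracker, the object names, and the pixel threshold are assumptions, not specified by the embodiment:

```python
import math

def classify_by_displacement(objects_prev, objects_curr, threshold_px=8.0):
    """Classify tracked objects as close shot (near) or long shot (far).

    objects_prev / objects_curr map an object id to its (x, y) center in
    two consecutive first preview images. Objects that move at least
    `threshold_px` pixels between frames are treated as close shot
    (large apparent motion); the rest as long shot.
    """
    near, far = [], []
    for obj_id, (x0, y0) in objects_prev.items():
        if obj_id not in objects_curr:
            continue  # object not matched in the second frame
        x1, y1 = objects_curr[obj_id]
        displacement = math.hypot(x1 - x0, y1 - y0)
        if displacement >= threshold_px:
            near.append(obj_id)   # displacement >= threshold -> close shot
        else:
            far.append(obj_id)    # displacement < threshold -> long shot
    return near, far

prev = {"person": (100, 200), "mountain": (400, 50)}
curr = {"person": (112, 207), "mountain": (401, 50)}
near, far = classify_by_displacement(prev, curr)
```

The intuition is parallax: between two frames taken from a slightly moving hand-held device, nearby objects shift more in the image than distant ones.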
In some embodiments, when dividing the shooting scene into a long shot and a short shot according to the two frames of the first preview image, the processor 301 may perform:
Identifying the position areas of the same object in the two frames of first preview images to obtain a first position area and a second position area;
Combining the first position area and the second position area to obtain a combined position area of the same object;
And judging whether the area ratio of the merging position area to the first position area or the second position area reaches a preset ratio, if so, determining the same object as a close view, and if not, determining the same object as a long view.
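This union-area test can be sketched with axis-aligned bounding boxes — a hypothetical illustration in which the position areas are rectangles and `ratio_threshold` (the "preset ratio") is 1.2; neither the box representation nor the value is fixed by the embodiment:

```python
def box_area(box):
    """Area of an axis-aligned box given as (x0, y0, x1, y1)."""
    return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

def classify_by_union_ratio(box_prev, box_curr, ratio_threshold=1.2):
    """Classify one object from its position areas in two consecutive
    first preview images.

    The two position areas are merged (their union); if the union is
    noticeably larger than a single area, the object moved between
    frames and is treated as a close shot, otherwise as a long shot.
    """
    # Intersection rectangle of the two boxes (may be empty).
    ix0 = max(box_prev[0], box_curr[0])
    iy0 = max(box_prev[1], box_curr[1])
    ix1 = min(box_prev[2], box_curr[2])
    iy1 = min(box_prev[3], box_curr[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = box_area(box_prev) + box_area(box_curr) - inter
    ratio = union / box_area(box_prev)
    return "close shot" if ratio >= ratio_threshold else "long shot"

static_obj = classify_by_union_ratio((0, 0, 10, 10), (0, 0, 10, 10))  # ratio 1.0
moving_obj = classify_by_union_ratio((0, 0, 10, 10), (5, 0, 15, 10))  # ratio 1.5
```

A stationary (distant) object yields a union equal to a single area (ratio near 1), while a displaced (near) object inflates the union above the preset ratio.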
In some embodiments, when dividing the shooting scene into a long-range view and a short-range view according to the first preview image of the frame and the second preview image of the frame, the processor 301 may perform:
acquiring depth information of objects in the shooting scene according to the first preview image and the second preview image, wherein the depth information comprises a shooting distance;
determining an object whose shooting distance is smaller than a preset distance as a close shot;
and determining an object whose shooting distance is greater than or equal to the preset distance as a long shot.
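With depth information from the two cameras, the split reduces to a threshold on shooting distance. A minimal NumPy sketch, assuming a dense depth map in meters is already available from dual-camera triangulation and using an assumed preset distance of 3 m; objects nearer than the preset distance form the close shot:

```python
import numpy as np

def split_near_far(depth_m, preset_distance_m=3.0):
    """Split a per-pixel depth map (in meters) into close-shot and
    long-shot boolean masks using the preset shooting distance."""
    depth = np.asarray(depth_m, dtype=float)
    near_mask = depth < preset_distance_m   # close shot: nearer than preset
    far_mask = ~near_mask                   # long shot: at or beyond preset
    return near_mask, far_mask

depth = np.array([[1.0, 5.0],
                  [2.5, 10.0]])
near_mask, far_mask = split_near_far(depth)
```

The resulting masks are exactly what the later mask-processing steps consume: `near_mask` keeps the close shot, `far_mask` keeps the long shot.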
In some embodiments, when acquiring the multiple frames of first images of the shooting scene, the processor 301 may perform:
acquiring multiple frames of first images of the shooting scene through the first camera according to the same exposure parameter;
when acquiring the multiple frames of second images of the shooting scene, the processor 301 may perform:
acquiring multiple frames of second images of the shooting scene through the second camera according to different exposure parameters.
In some embodiments, when the multiple frames of first images are synthesized to obtain a first synthesized image with a high dynamic range in the close shot, the processor 301 may perform:
masking the long shot in the multiple frames of first images;
and performing high-dynamic image synthesis processing on the multiple frames of first images after mask processing to obtain a first synthesized image with a high dynamic range in the close shot.
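As a concrete sketch of this step: mask away the long-shot pixels, then merge the same-exposure short frames. Simple averaging stands in here for the unspecified multi-frame high-dynamic synthesis (averaging same-exposure short frames mainly suppresses noise while keeping highlights unclipped); the function name and zero fill value are assumptions:

```python
import numpy as np

def merge_close_shot(frames, near_mask):
    """Merge same-exposure (short) frames over the close-shot region.

    frames: list of HxW float arrays captured with identical exposure;
    near_mask: HxW bool array, True where the close shot is.
    Long-shot pixels are zeroed by the mask processing before merging.
    """
    masked = [np.where(near_mask, f, 0.0) for f in frames]
    return np.stack(masked).mean(axis=0)

near_mask = np.array([[True, False]])
frames = [np.array([[2.0, 9.0]]), np.array([[4.0, 9.0]])]
first_composite = merge_close_shot(frames, near_mask)
```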
In some embodiments, when the multiple frames of second images are synthesized to obtain a second synthesized image with a high dynamic range in the long shot, the processor 301 may perform:
masking the close shot in the multiple frames of second images;
and performing high-dynamic image synthesis processing on the multiple frames of second images after mask processing to obtain a second synthesized image with a high dynamic range in the long shot.
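For the long shot the frames differ in exposure, so a simple exposure-fusion weighting can stand in for the high-dynamic synthesis: pixels closer to mid-gray (well exposed) get higher weight. This is a sketch under the assumption that the input frames are normalized to [0, 1]; the weighting function is illustrative, not the embodiment's:

```python
import numpy as np

def merge_far_shot(frames, far_mask):
    """Fuse bracketed-exposure frames over the long-shot region.

    frames: list of HxW float arrays in [0, 1], one per exposure;
    far_mask: HxW bool array, True where the long shot is.
    Each pixel is a weighted average favoring well-exposed samples.
    """
    stack = np.stack(frames)                   # N x H x W
    weights = 1.0 - 2.0 * np.abs(stack - 0.5)  # near mid-gray -> high weight
    weights = np.clip(weights, 1e-6, None)     # avoid division by zero
    fused = (weights * stack).sum(axis=0) / weights.sum(axis=0)
    return np.where(far_mask, fused, 0.0)      # mask out the close shot

far_mask = np.array([[True, True]])
frames = [np.array([[0.1, 0.5]]), np.array([[0.5, 0.9]])]
second_composite = merge_far_shot(frames, far_mask)
```

Each output pixel leans toward whichever exposure rendered it closest to mid-gray, which is the essential effect of fusing a surrounding-exposure bracket.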
In some embodiments, when the first synthesized image and the second synthesized image are synthesized to obtain the target image with a high dynamic range, the processor 301 may perform:
aligning the first synthesized image and the second synthesized image;
acquiring a first distant view weight value and a first near view weight value of the first synthesized image, and acquiring a second distant view weight value and a second near view weight value of the second synthesized image, wherein the first near view weight value is greater than the second near view weight value, and the first distant view weight value is less than the second distant view weight value;
and synthesizing the aligned first synthesized image and the aligned second synthesized image according to the first distant view weight value, the first near view weight value, the second distant view weight value, and the second near view weight value to obtain a target image with a high dynamic range.
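The weighted synthesis of the two aligned composites can be sketched as a per-region blend. The weight ordering (first near > second near, first far < second far) follows the text; the numeric weights, chosen to sum to 1 in each region, are assumptions:

```python
import numpy as np

def fuse_composites(first, second, near_mask,
                    first_near=0.8, first_far=0.2,
                    second_near=0.2, second_far=0.8):
    """Blend the aligned first and second composites into the target image.

    The close-shot region favors the first composite (short exposures)
    and the long-shot region favors the second (bracketed exposures),
    per the stated ordering of the four weight values.
    """
    w_first = np.where(near_mask, first_near, first_far)
    w_second = np.where(near_mask, second_near, second_far)
    return w_first * first + w_second * second

near_mask = np.array([[True, False]])
first = np.array([[1.0, 1.0]])   # close-shot composite
second = np.array([[0.0, 0.0]])  # long-shot composite
target = fuse_composites(first, second, near_mask)
```

Keeping the two weights in each region summing to 1 preserves overall brightness across the blend boundary.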
Referring to fig. 11, fig. 11 is a second schematic structural diagram of an electronic device according to an embodiment of the present application. In some embodiments, the electronic device 300 may further include a display 304, a radio frequency circuit 305, an audio circuit 306, and a power supply 307. The display 304, the radio frequency circuit 305, the audio circuit 306, and the power supply 307 are each electrically connected to the processor 301.
The display 304 may be used to display information entered by or provided to the user, as well as various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof.
The radio frequency circuit 305 may be used to transmit and receive radio frequency signals so as to establish wireless communication with network devices or other electronic devices and to exchange signals with them.
The audio circuit 306 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone.
The power supply 307 may be used to power the various components of the electronic device 300. In some embodiments, the power supply 307 may be logically coupled to the processor 301 through a power management system, so that functions such as managing charging, discharging, and power consumption are performed through the power management system.
Although not shown in fig. 11, the electronic device 300 may further include a bluetooth module or the like, which is not described herein.
As can be seen from the above, in the electronic device provided in this embodiment, the shooting scene is divided into a long shot and a close shot during preview. When a shooting instruction or a video recording instruction is received, a first synthesized image with a high dynamic range in the close shot is obtained by multi-frame short-exposure high-dynamic image synthesis, and a second synthesized image with a high dynamic range in the long shot is obtained by surrounding-exposure high-dynamic image synthesis. Finally, the first synthesized image and the second synthesized image are synthesized to obtain the target image. The target image thus has the characteristics of multi-frame short exposure in the close shot and of surrounding exposure in the long shot, so the image quality can be improved to a greater extent.
An embodiment of the present application further provides a storage medium, where the storage medium stores a computer program, and when the computer program runs on a computer, the computer program causes the computer to execute the image processing method in any one of the above embodiments, such as: dividing a shooting scene into a long shot scene and a short shot scene; acquiring multiple frames of first images of the shooting scene, wherein the multiple frames of first images have the same exposure parameter; synthesizing the plurality of frames of first images to obtain a first synthesized image with a high dynamic range of the close shot; acquiring multiple frames of second images of the shooting scene, wherein the multiple frames of second images have different exposure parameters; synthesizing the plurality of frames of second images to obtain a second synthesized image with the high dynamic range of the long-range view; and synthesizing the first synthetic image and the second synthetic image to obtain a target image with a high dynamic range.
In the embodiment of the present application, the storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, for the image processing method of the embodiments of the present application, a person of ordinary skill in the art can understand that all or part of the process of implementing the image processing method can be completed by controlling the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, such as the memory of an electronic device, and executed by at least one processor in the electronic device; during execution, the process of the embodiments of the image processing method can be included. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
In the image processing apparatus according to the embodiments of the present application, the functional modules may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a storage medium, and an electronic device according to embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (14)

1. An image processing method, comprising:
dividing a shooting scene into a long shot and a close shot;
acquiring multiple frames of first images of the shooting scene, wherein the multiple frames of first images have the same exposure parameter;
synthesizing the multiple frames of first images to obtain a first synthesized image with a high dynamic range in the close shot;
acquiring multiple frames of second images of the shooting scene, wherein the multiple frames of second images have different exposure parameters;
synthesizing the multiple frames of second images to obtain a second synthesized image with a high dynamic range in the long shot;
and synthesizing the first synthesized image and the second synthesized image to obtain a target image with a high dynamic range.
2. The image processing method according to claim 1, wherein before the dividing the shooting scene into the long shot and the short shot, further comprising:
Acquiring a first preview image of the shooting scene through a first camera, and storing the first preview image to a first image cache queue according to the sequence of exposure of the first camera;
Acquiring a second preview image of the shooting scene through a second camera, and storing the second preview image to a second image cache queue according to the sequence of exposure of the second camera;
the dividing of the shooting scene into a long shot and a short shot comprises:
acquiring two consecutive frames of first preview images from the first image cache queue;
Dividing a shooting scene into a long shot and a short shot according to the two frames of first preview images; or
Acquiring a frame of first preview image from the first image cache queue, and acquiring a frame of second preview image from the second image cache queue;
And dividing a shooting scene into a long shot and a short shot according to the first preview image and the second preview image.
3. The image processing method according to claim 2, wherein the dividing of the shooting scene into a long-range view and a short-range view according to the two-frame first preview image comprises:
identifying the same object in the two frames of first preview images, and acquiring the displacement of each object in the two frames of first preview images;
Determining the object with the displacement greater than or equal to a preset threshold value as a close shot;
And determining the object with the displacement smaller than a preset threshold value as a long shot.
4. the image processing method according to claim 2, wherein the dividing of the shooting scene into a long-range view and a short-range view according to the two-frame first preview image comprises:
identifying the position areas of the same object in the two frames of first preview images to obtain a first position area and a second position area;
combining the first position area and the second position area to obtain a combined position area of the same object;
And judging whether the area ratio of the merging position area to the first position area or the second position area reaches a preset ratio, if so, determining the same object as a close view, and if not, determining the same object as a long view.
5. the image processing method according to claim 2, wherein the dividing of the shooting scene into a long shot and a short shot according to the one frame first preview image and the one frame second preview image comprises:
acquiring depth information of objects in the shooting scene according to the first preview image and the second preview image, wherein the depth information comprises a shooting distance;
determining an object whose shooting distance is smaller than a preset distance as a close shot;
and determining an object whose shooting distance is greater than or equal to the preset distance as a long shot.
6. The image processing method according to claim 2, wherein the acquiring a plurality of frames of the first image of the shooting scene comprises:
acquiring a plurality of frames of first preview images from the first image cache queue, and taking the plurality of frames of first preview images as a plurality of frames of first images;
The acquiring of the multiple frames of second images of the shooting scene includes:
and acquiring a frame of first preview image from the first image cache queue, acquiring a frame of second preview image from the second image cache queue, and taking the frame of first preview image and the frame of second preview image as a plurality of frames of second images.
7. the image processing method according to claim 1, wherein the acquiring a plurality of frames of the first image of the shooting scene comprises:
acquiring a plurality of frames of first images of the shooting scene according to the same exposure parameter through the first camera;
The acquiring of the multiple frames of second images of the shooting scene includes:
and acquiring multiple frames of second images of the shooting scene according to different exposure parameters through the second camera.
8. the image processing method according to any one of claims 1 to 7, wherein the synthesizing the plurality of frames of first images to obtain the first synthesized image with the high dynamic range in the close view comprises:
masking the long shot in the multiple frames of first images;
And performing high-dynamic image synthesis processing on the multi-frame first image subjected to mask processing to obtain a first synthesized image with a high dynamic range in the close range.
9. The image processing method according to any one of claims 1 to 7, wherein the synthesizing the plurality of frames of second images to obtain a second synthesized image with the high dynamic range of the distant view includes:
Masking the close shot in the plurality of frames of second images;
and performing high-dynamic image synthesis processing on the plurality of frames of second images after mask processing to obtain a second synthetic image with the high dynamic range of the long-range view.
10. the image processing method according to any one of claims 1 to 7, wherein the synthesizing the first synthesized image and the second synthesized image to obtain the target image with a high dynamic range includes:
Aligning the first composite image and the second composite image;
acquiring a first distant view weight value and a first near view weight value of the first synthetic image, and acquiring a second distant view weight value and a second near view weight value of the second synthetic image, wherein the first near view weight value is greater than the second near view weight value, and the first distant view weight value is less than the second distant view weight value;
and synthesizing the aligned first synthetic image and the aligned second synthetic image according to the first distant view weight value, the first close view weight value, the second distant view weight value and the second close view weight value to obtain a target image with a high dynamic range.
11. An image processing apparatus characterized by comprising:
The dividing module is used for dividing the shooting scene into a long shot and a short shot;
The first acquisition module is used for acquiring multiple frames of first images of the shooting scene, wherein the multiple frames of first images have the same exposure parameter;
the first synthesis module is used for carrying out synthesis processing on the multiple frames of first images to obtain a first synthesis image with a high dynamic range of the close view;
the second acquisition module is used for acquiring multiple frames of second images of the shooting scene, wherein the multiple frames of second images have different exposure parameters;
the second synthesis module is used for synthesizing the plurality of frames of second images to obtain a second synthesis image with the high dynamic range of the long-range view;
And the third synthesis module is used for synthesizing the first synthesis image and the second synthesis image to obtain a target image with a high dynamic range.
12. A storage medium having stored thereon a computer program, characterized in that, when the computer program runs on a computer, it causes the computer to execute the image processing method according to any one of claims 1 to 10.
13. An electronic device comprising a processor, a memory, a first camera and a second camera, the memory having a computer program, wherein the processor is configured to execute the image processing method according to any one of claims 1 to 10 by calling the computer program.
14. the electronic device of claim 13, wherein the first camera is a standard camera and the second camera is a tele camera.
CN201910791524.XA 2019-08-26 2019-08-26 Image processing method, image processing device, storage medium and electronic equipment Active CN110572584B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910791524.XA CN110572584B (en) 2019-08-26 2019-08-26 Image processing method, image processing device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110572584A true CN110572584A (en) 2019-12-13
CN110572584B CN110572584B (en) 2021-05-07

Family

ID=68776090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910791524.XA Active CN110572584B (en) 2019-08-26 2019-08-26 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110572584B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010273001A (en) * 2009-05-20 2010-12-02 Mega Chips Corp Image processor, imaging apparatus, and synthetic image generating method
JP2011221924A (en) * 2010-04-13 2011-11-04 Canon Inc Image processing device, display device, and photographing device
CN103095977A (en) * 2011-10-31 2013-05-08 佳能企业股份有限公司 Image capturing method and image processing system and image capturing device using image capturing method
JP2016208118A (en) * 2015-04-16 2016-12-08 キヤノン株式会社 Image processing apparatus, image processing method, and program
US20170214847A1 (en) * 2016-01-22 2017-07-27 Top Victory Investments Ltd. Method for Setting Shooting Parameters of HDR mode and Electronic Device Using the Same
CN108012080A (en) * 2017-12-04 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110933326A (en) * 2019-12-16 2020-03-27 中国科学院云南天文台 Starry sky portrait shooting method and electronic equipment
CN111526299A (en) * 2020-04-28 2020-08-11 华为技术有限公司 High dynamic range image synthesis method and electronic equipment
US11871123B2 (en) 2020-04-28 2024-01-09 Honor Device Co., Ltd. High dynamic range image synthesis method and electronic device
WO2021218536A1 (en) * 2020-04-28 2021-11-04 荣耀终端有限公司 High-dynamic range image synthesis method and electronic device
CN111526299B (en) * 2020-04-28 2022-05-17 荣耀终端有限公司 High dynamic range image synthesis method and electronic equipment
CN113810590A (en) * 2020-06-12 2021-12-17 华为技术有限公司 Image processing method, electronic device, medium, and system
CN111917950A (en) * 2020-06-30 2020-11-10 北京迈格威科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111726533B (en) * 2020-06-30 2021-11-16 RealMe重庆移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium
CN111917950B (en) * 2020-06-30 2022-07-22 北京迈格威科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111726533A (en) * 2020-06-30 2020-09-29 RealMe重庆移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium
CN113992861A (en) * 2020-07-27 2022-01-28 虹软科技股份有限公司 Image processing method and image processing device
WO2022021999A1 (en) * 2020-07-27 2022-02-03 虹软科技股份有限公司 Image processing method and image processing apparatus
WO2022262259A1 (en) * 2021-06-15 2022-12-22 展讯通信(上海)有限公司 Image processing method and apparatus, and device, medium and chip
CN113888452A (en) * 2021-06-23 2022-01-04 荣耀终端有限公司 Image fusion method, electronic device, storage medium, and computer program product
WO2023056785A1 (en) * 2021-10-09 2023-04-13 荣耀终端有限公司 Image processing method and electronic device

Also Published As

Publication number Publication date
CN110572584B (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN110572584B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110505411B (en) Image shooting method and device, storage medium and electronic equipment
CN111669493B (en) Shooting method, device and equipment
CN110445988B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110493538B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110213502B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110602467B (en) Image noise reduction method and device, storage medium and electronic equipment
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN112150399A (en) Image enhancement method based on wide dynamic range and electronic equipment
CN110381263B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111327824B (en) Shooting parameter selection method and device, storage medium and electronic equipment
CN110266954B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110958401B (en) Super night scene image color correction method and device and electronic equipment
CN111601040B (en) Camera control method and device and electronic equipment
CN114092364A (en) Image processing method and related device
CN110581957B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110430370B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110290325B (en) Image processing method, image processing device, storage medium and electronic equipment
CN115526787B (en) Video processing method and device
US11503223B2 (en) Method for image-processing and electronic device
CN110740266B (en) Image frame selection method and device, storage medium and electronic equipment
CN110278375B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111182208B (en) Photographing method and device, storage medium and electronic equipment
CN110278386B (en) Image processing method, image processing device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant