CN117560579A - Shooting processing method, shooting processing device, electronic equipment and storage medium

Shooting processing method, shooting processing device, electronic equipment and storage medium

Info

Publication number
CN117560579A
CN117560579A
Authority
CN
China
Prior art keywords
detection result
image
original image
target
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311495408.6A
Other languages
Chinese (zh)
Inventor
朱文波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202311495408.6A
Publication of CN117560579A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting processing method, a shooting processing device, electronic equipment and a storage medium. A first original image acquired by a first camera and a second original image acquired by a second camera are obtained; scene content detection is performed on at least the first original image of the two, yielding a target detection result; image processing is performed on the first original image based on the target detection result to obtain a first intermediate image, and on the second original image based on the same target detection result to obtain a second intermediate image; and the first intermediate image and the second intermediate image are fused to obtain a target image. Because scene content detection is performed on at least the image collected by the first camera, and the images collected by both cameras are processed synchronously based on the same detection result, consistency of image processing is ensured and the obtained image effect is improved.

Description

Shooting processing method, shooting processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of electronic devices, and in particular, to a shooting processing method, a shooting processing device, an electronic device, and a storage medium.
Background
With the development of imaging technology, electronic devices now commonly support a plurality of different types of camera lenses and can collect images through several of these lenses at the same time. However, the image processing applied to different types of lenses can differ considerably, so that the finally obtained image effect is poor.
Disclosure of Invention
In view of the above problems, the present application proposes a shooting processing method, a device, an electronic apparatus, and a storage medium to solve the above problems.
In a first aspect, an embodiment of the present application provides a shooting processing method, the method including: acquiring a first original image acquired by a first camera and a second original image acquired by a second camera; performing scene content detection on at least the first original image, of the first original image and the second original image, to obtain a target detection result; performing image processing on the first original image based on the target detection result to obtain a first intermediate image, and performing image processing on the second original image based on the target detection result to obtain a second intermediate image; and fusing the first intermediate image and the second intermediate image to obtain a target image.
In a second aspect, an embodiment of the present application provides a shooting processing apparatus, including: an original image acquisition module, configured to acquire a first original image acquired by a first camera and a second original image acquired by a second camera; a detection result obtaining module, configured to perform scene content detection on at least the first original image, of the first original image and the second original image, to obtain a target detection result; an intermediate image obtaining module, configured to perform image processing on the first original image based on the target detection result to obtain a first intermediate image, and to perform image processing on the second original image based on the target detection result to obtain a second intermediate image; and a target image obtaining module, configured to fuse the first intermediate image and the second intermediate image to obtain a target image.
In a third aspect, embodiments of the present application provide an electronic device comprising a memory and a processor, the memory being coupled to the processor and storing instructions that, when executed by the processor, perform the above method.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having program code stored therein, the program code being callable by a processor to perform the above method.
According to the shooting processing method, shooting processing device, electronic equipment and storage medium provided above, a first original image acquired by a first camera and a second original image acquired by a second camera are obtained; scene content detection is performed on at least the first original image of the two to obtain a target detection result; the first original image is processed based on the target detection result to obtain a first intermediate image, and the second original image is processed based on the same result to obtain a second intermediate image; and the two intermediate images are fused to obtain a target image. Scene content detection is thus performed on at least the first camera's image, of the images acquired by the two cameras, and both images are processed synchronously based on the detection result, so that consistency of image processing can be ensured and the obtained image effect improved.
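For orientation only, the following Python sketch restates the four claimed steps as a toy pipeline. Every function name and the NumPy frame representation are hypothetical illustrations, not part of the disclosed embodiments, and the placeholder bodies stand in for arbitrary detectors, processors and fusion algorithms.

```python
import numpy as np

def detect_scene_content(image: np.ndarray) -> dict:
    # Placeholder for any scene-content detector (faces, buildings, animals...).
    return {"face_regions": []}

def process_image(image: np.ndarray, detection: dict) -> np.ndarray:
    # Placeholder for detection-driven processing (brightening, denoising...).
    return image

def fuse(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Placeholder pixel-level fusion: a plain average of two aligned frames.
    return ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(np.uint8)

def shooting_pipeline(first_raw: np.ndarray, second_raw: np.ndarray) -> np.ndarray:
    target_detection = detect_scene_content(first_raw)      # detect on at least the first image
    first_mid = process_image(first_raw, target_detection)  # the same result drives both paths
    second_mid = process_image(second_raw, target_detection)
    return fuse(first_mid, second_mid)                      # fuse into the target image

if __name__ == "__main__":
    first = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for the first camera's frame
    second = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for the second camera's frame
    print(shooting_pipeline(first, second).shape)     # (480, 640, 3)
```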
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a shooting processing method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a shooting processing method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a shooting processing method according to an embodiment of the present application;
FIG. 4 is a first comparative schematic view of images acquired by different cameras according to an embodiment of the present application;
FIG. 5 is a second comparative schematic view of images acquired by different cameras according to an embodiment of the present application;
fig. 6 is a schematic flow chart of a shooting processing method according to an embodiment of the present application;
fig. 7 is a schematic flow chart of a shooting processing method according to an embodiment of the present application;
fig. 8 is a schematic flow chart of a shooting processing method according to an embodiment of the present application;
fig. 9 shows a block diagram of a shooting processing apparatus according to an embodiment of the present application;
fig. 10 shows a block diagram of an electronic device for executing the shooting processing method according to an embodiment of the present application;
fig. 11 shows a storage unit for storing or carrying program code implementing the shooting processing method according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present application.
In the related art, the same set of processing strategies is applied to the whole image, for example global brightening or global noise reduction. With the development of technology, however, the emphasis has shifted toward meeting users' differentiated requirements under limited system performance, that is, processing different parts of the image differently according to user behavior or scene characteristics, which is why image processing strategies based on scene detection results have been introduced. At present, many electronic devices use their cameras in multi-shot or dual-shot scenes (such as blurring or SAT), and differences between the characteristics of the cameras easily cause deviations between the scene recognition results of the images they acquire, especially for scene detections affected by viewing angle or brightness. This in turn leads to obvious differences in the subsequent image processing strategies applied to the images from different cameras, and affects the subsequent fusion of those images (possibly causing image jumps or obvious effect differences between regions with similar content).
In general, the result of image scene detection is placed in a reserved metadata area (different manufacturers have their own algorithm integration and structure designs), so the image data produced by each camera has its own area for storing metadata information. Moreover, scene or content detection is performed on a single image, or on historical frames of the same stream, without reference to the other stream, so differences between the detection results arise naturally; that is, such deviations are possible by design.
For some scene detections, the accuracy of the result is related to the characteristics of the camera, and some algorithms process regions differently according to the detection result. In a multi-shot or dual-shot scene, if the detection results of the cameras are inconsistent, different algorithm processing strategies will be applied to the same area of the two images, causing their results to differ considerably; when a new image is then generated, or parameters are updated, based on the fusion of those images, the outcome no longer matches the actual scene, which affects the final output frame.
In view of the above problems, the inventor, through long-term research, proposes the shooting processing method, device, electronic device and storage medium of the embodiments of the present application: by performing scene content detection on at least the image acquired by the first camera, of the images acquired by the first and second cameras, and synchronously processing both images based on the detection result, consistency of image processing can be ensured and the obtained image effect improved. The specific shooting processing method is described in detail in the following embodiments.
Referring to fig. 1, fig. 1 is a flowchart illustrating a shooting processing method according to an embodiment of the present application. The method performs scene content detection on at least the image collected by the first camera, of the images collected by the first camera and the second camera, and synchronously performs image processing on both images based on the detection result, so that consistency of image processing can be ensured and the obtained image effect improved. In a specific embodiment, the shooting processing method is applied to the shooting processing apparatus 200 shown in fig. 9 and to the electronic device 100 (fig. 10) provided with the shooting processing apparatus 200. The following takes an electronic device as an example to describe the specific flow of this embodiment; it will be understood that the electronic device of this embodiment may include a smart phone, a tablet computer, a wearable electronic device, and the like, which is not limited herein. The flowchart shown in fig. 1 is described in detail below, and the shooting processing method may specifically include the following steps:
step S110: and acquiring a first original image acquired by the first camera and acquiring a second original image acquired by the second camera.
The electronic device may include at least two cameras, for example, the electronic device may include a first camera and a second camera, or the electronic device may include a first camera and a plurality of second cameras, which are not limited herein. Optionally, the first camera may include an ultra-wide angle camera, a tele camera, an ultra-macro camera, or a depth camera, and the second camera may include an ultra-wide angle camera, a tele camera, an ultra-macro camera, or a depth camera, which is not limited herein.
The first camera may refer to a main camera currently used for image preview and image acquisition, and in this embodiment, the first original image may be obtained by performing image acquisition through the first camera.
In some embodiments, the first original image may include any one of an RGB (Red, Green, Blue) image, a gray-scale image, a depth image, an image corresponding to the Y component of a YUV image, and the like, which is not limited herein. The "Y" in a YUV image represents brightness (Luminance or Luma), that is, the gray-scale value, while "U" and "V" represent chrominance (Chroma), which describes the color and saturation of a pixel. The first original image may be acquired from any scene, for example a person, a landscape or a building, which is not limited herein.
The second camera may refer to a camera for assisting in capturing images, and the image data captured by the second camera is used for supplementing the image data captured by the first camera. In this embodiment, the second original image may be obtained by performing image acquisition with the second camera.
In some embodiments, the second original image may likewise be any one of an RGB image, a gray-scale image, a depth image, an image corresponding to the Y component of a YUV image, and the like, and may be acquired from any scene, for example a person, a landscape or a building, which is not limited herein.
In some embodiments, a user may turn on a camera application of the electronic device to start a camera function, trigger a double-shot or multi-shot scene to start image preview/recording, and at this time, the electronic device may perform image acquisition through both the first camera and the second camera. As an embodiment, the user may activate the image capturing function by clicking on the "camera icon", may activate the image capturing function by inputting voice information, and the like, which is not limited herein.
As an example, the first camera may be a wide angle camera and the second camera may be an ultra wide angle camera; the first camera may be a wide-angle camera, the second camera may be a tele camera, etc., and is not limited herein.
In some embodiments, in the process of image capturing by the first camera and the second camera, the electronic device may control to update the image capturing parameters, for example, may update the shooting magnification, etc., and in addition, the electronic device may control to switch between the first camera and the second camera, which is not limited herein.
Step S120: and detecting scene content of at least the first original image in the first original image and the second original image to obtain a target detection result.
It can be understood that, because the first camera is the main camera currently used for image preview and acquisition, while the second camera assists it and its image data supplements that of the first camera, the image acquired by the first camera (i.e. the first original image) can serve as the main image for scene content detection. In this embodiment, once the first original image and the second original image are obtained, scene content detection may be performed on at least the first original image of the two to obtain the target detection result. That is, of the image collected by the first camera and the image collected by the second camera, at least the first camera's image undergoes scene content detection to obtain the target detection result.
Taking a dual-shot scene as an example, such a scene generally uses W+UW or W+Tele; that is, W is always present, so for UW and Tele, when the detection results are inconsistent, the detection result of W may be used as the final detection result. Here W denotes the first camera (typically in wide-angle mode), whose lens captures a relatively wide angle of view that may include more elements. UW denotes the ultra-wide-angle mode, which captures an even wider field of view than the normal wide-angle mode. Tele denotes the telephoto mode, in which the lens captures distant details and magnifies distant objects.
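As a sketch of this "W wins" rule, the following hypothetical helper reconciles two detection dictionaries by letting the wide camera's values override the auxiliary camera's on any disagreement; the dictionary format is an assumption for illustration, not taken from the embodiments.

```python
def resolve_detections(w_result: dict, aux_result: dict) -> dict:
    """Hypothetical W+UW / W+Tele reconciliation: W's detection prevails."""
    resolved = dict(aux_result)
    for key, w_value in w_result.items():
        if resolved.get(key) != w_value:
            resolved[key] = w_value  # inconsistent entry: take W's value
    return resolved

# UW misses the portrait that W sees; the final result keeps W's view.
print(resolve_detections({"scene": "portrait"}, {"scene": "landscape"}))
```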
As an embodiment, in the case of obtaining the first original image and the second original image, scene content detection may be performed only on the first original image, and the target detection result may be obtained.
As still another embodiment, in the case of obtaining the first original image and the second original image, the first original image may be subjected to scene content detection, and the second original image may be subjected to scene content detection, to obtain the target detection result.
In some embodiments, in a case where the first original image and the second original image are obtained, status information of the electronic device may be obtained, and scene content detection may be performed on the first original image or on the first original image and the second original image based on the status information of the electronic device. Alternatively, the status information of the electronic device may include one or a combination of several of a remaining power of the electronic device, an occupancy rate of a central processor of the electronic device, a temperature of the electronic device, and a load of the electronic device.
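A minimal sketch of such a status-driven choice is given below; the DeviceStatus fields mirror the status information listed above, while the thresholds and mode names are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DeviceStatus:
    battery_pct: float        # remaining power
    cpu_occupancy_pct: float  # central processor occupancy
    temperature_c: float      # device temperature
    load_pct: float           # overall device load

def choose_detection_mode(status: DeviceStatus) -> str:
    constrained = (status.battery_pct < 20
                   or status.cpu_occupancy_pct > 80
                   or status.temperature_c > 45
                   or status.load_pct > 90)
    # Under pressure, detect only on the first camera's image; otherwise
    # detect on both images and reconcile the two results afterwards.
    return "first_only" if constrained else "both"

print(choose_detection_mode(DeviceStatus(15, 30, 35, 40)))  # first_only
```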
In some embodiments, the scene content detection may be performed on the image by a convolutional neural network, the scene content detection may be performed on the image by a support vector machine, the scene content detection may be performed on the image by a K-nearest neighbor algorithm, the scene content detection may be performed on the image by a bayesian classifier, the scene content detection may be performed on the image by a decision tree, the scene content detection may be performed on the image by a random forest, and the like, which is not limited herein.
In some embodiments, scene content detection of an image may include face detection of an image, building detection of an image, animal detection of an image, and the like, without limitation. The target detection result may include the number of faces, face area information, etc., which is not limited herein.
As one implementation, when the first camera and the second camera are used for image acquisition and preview, at least the scene detection module corresponding to the first camera may acquire the current movement trend of the first camera in real time, focus on parameter changes, and rapidly perform scene content detection on at least the preview frames acquired by the first camera to obtain the target detection result.
Step S130: and performing image processing on the first original image based on the target detection result to obtain a first intermediate image, and performing image processing on the second original image based on the target detection result to obtain a second intermediate image.
In this embodiment, once the target detection result is obtained, the first original image may be processed based on it to obtain the first intermediate image, and the second original image may be processed based on it to obtain the second intermediate image. It can be understood that the target detection result is applied synchronously to both original images, so that subsequent algorithms can use the same processing strategy for the same actual scene area (the area corresponding to the same part of the actual scene in the main and auxiliary images). This ensures consistent processing results and prevents differentiated processing strategies from weakening the image effect and degrading the user experience.
In some embodiments, once the target detection result is obtained, differentiated image processing, such as brightness adjustment, white balance adjustment, exposure adjustment, contrast and saturation adjustment, sharpening, noise reduction, cropping and composition adjustment, or filter processing, may be applied to certain areas of the first original image based on the target detection result to obtain the first intermediate image, and the same kinds of differentiated processing may be applied to certain areas of the second original image based on the target detection result to obtain the second intermediate image.
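The following sketch illustrates one such differentiated treatment: only the regions named in the detection result are brightened, while the rest of the frame is left untouched. The (x, y, w, h) region format and the 1.2 gain are assumptions for illustration.

```python
import numpy as np

def process_with_detection(image: np.ndarray, detection: dict) -> np.ndarray:
    out = image.astype(np.float32)
    for (x, y, w, h) in detection.get("face_regions", []):
        # Differentiated processing: brighten only the detected face regions.
        out[y:y + h, x:x + w] = np.clip(out[y:y + h, x:x + w] * 1.2, 0, 255)
    return out.astype(np.uint8)

img = np.full((100, 100, 3), 100, dtype=np.uint8)
result = process_with_detection(img, {"face_regions": [(10, 10, 30, 30)]})
print(result[15, 15, 0], result[80, 80, 0])  # 120 inside the face box, 100 outside
```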
Step S140: and carrying out fusion processing on the first intermediate image and the second intermediate image to obtain a target image.
In this embodiment, in the case of obtaining the first intermediate image and the second intermediate image, the first intermediate image and the second intermediate image may be subjected to fusion processing to obtain the target image, that is, the final preview frame or the recording frame.
As one way, in the case where the first intermediate image and the second intermediate image are obtained, the fusion processing may be performed by means of pixel-level fusion. Alternatively, the target image may be obtained by performing fusion processing by combining pixel information of the first intermediate image and the second intermediate image.
As still another aspect, in the case where the first intermediate image and the second intermediate image are obtained, the fusion processing may be performed by means of feature level fusion. Alternatively, the target image may be obtained by extracting feature information in the first intermediate image and the second intermediate image and combining them for fusion processing.
As still another aspect, in the case of obtaining the first intermediate image and the second intermediate image, the fusion process may be performed by a decision-level fusion manner. Alternatively, the target image may be obtained by performing fusion processing on the decision results of the first intermediate image and the second intermediate image.
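As a concrete instance of the pixel-level variant, a minimal alpha-blend sketch is shown below, assuming the two intermediate images are already aligned and equally sized; a real pipeline would register the frames first.

```python
import numpy as np

def fuse_pixel_level(a: np.ndarray, b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # Weighted per-pixel combination of the two intermediate images.
    blended = alpha * a.astype(np.float32) + (1.0 - alpha) * b.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

a = np.full((2, 2, 3), 200, dtype=np.uint8)
b = np.full((2, 2, 3), 100, dtype=np.uint8)
print(fuse_pixel_level(a, b)[0, 0])  # [150 150 150]
```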
According to this shooting processing method, the first original image acquired by the first camera and the second original image acquired by the second camera are obtained; scene content detection is performed on at least the first original image of the two to obtain the target detection result; the first original image is processed based on the target detection result to obtain the first intermediate image, and the second original image is processed based on the same result to obtain the second intermediate image; and the two intermediate images are fused to obtain the target image. Scene content detection is thus performed on at least the image acquired by the first camera, and the images acquired by both cameras are processed synchronously based on the detection result, so that consistency of image processing can be ensured and the obtained image effect improved.
Referring to fig. 2, fig. 2 is a flowchart illustrating a shooting processing method according to an embodiment of the present application. The method is applied to the electronic device, and will be described in detail with respect to the flowchart shown in fig. 2, where the shooting processing method specifically includes the following steps:
step S210: and acquiring a first original image acquired by the first camera and acquiring a second original image acquired by the second camera.
The specific description of step S210 is referred to step S110, and will not be repeated here.
Step S220: and detecting scene content of the first original image to obtain a first detection result as the target detection result.
In this embodiment, when the first original image and the second original image are obtained, scene content detection may be performed on the first original image only, and the resulting first detection result is taken as the target detection result. That is, scene content detection is performed on the image acquired by the first camera (the first original image) but not on the image acquired by the second camera (the second original image); the first detection result obtained from the first original image is then determined to be the target detection result.
Optionally, when the first original image and the second original image are obtained, the scene detection module corresponding to the first camera may be enabled and the scene detection module corresponding to the second camera disabled, so that during scene content detection only the first camera's module runs. This reduces the system power consumption of the electronic device in a multi-shot or dual-shot scene; at the same time, directly using the first camera's detection result as the target detection result for processing both original images ensures consistency between image detection and processing.
Step S230: and performing image processing on the first original image based on the target detection result to obtain a first intermediate image, and performing image processing on the second original image based on the target detection result to obtain a second intermediate image.
Step S240: and carrying out fusion processing on the first intermediate image and the second intermediate image to obtain a target image.
The specific description of step S230 to step S240 refer to step S130 to step S140, and are not described herein.
Compared with the shooting processing method shown in fig. 1, the method provided by this embodiment further performs scene content detection on the first original image to obtain the first detection result as the target detection result. The image detection function of the second camera can then be disabled, saving system power while still ensuring consistent image processing.
Referring to fig. 3, fig. 3 is a flowchart illustrating a shooting processing method according to an embodiment of the present application. The method is applied to the electronic device, and will be described in detail with respect to the flowchart shown in fig. 3, where the shooting processing method specifically includes the following steps:
step S310: and acquiring a first original image acquired by the first camera and acquiring a second original image acquired by the second camera.
The specific description of step S310 is referred to step S110, and will not be repeated here.
Step S320: and detecting scene content of the first original image to obtain a first detection result, and detecting scene content of the second original image to obtain a second detection result.
In this embodiment, in the case of obtaining the first original image and the second original image, the first detection result may be obtained by performing scene content detection on the first original image, and the second detection result may be obtained by performing scene content detection on the second original image. That is, in the case of obtaining the first original image and the second original image, the scene content detection may be performed on the image (first original image) acquired by the first camera, and the scene content detection may be performed on the image (second original image) acquired by the second camera, whereby the first detection result obtained by the scene content detection performed on the first original image and the second detection result obtained by the scene content detection performed on the second original image may be obtained.
Optionally, when the first original image and the second original image are obtained, both the scene detection module corresponding to the first camera and the scene detection module corresponding to the second camera may be enabled, so that during scene content detection the first original image is detected by the first camera's module and the second original image by the second camera's module. This can improve the diversity and accuracy of scene content detection.
Step S330: and updating the first detection result based on the second detection result to obtain a first target detection result, and updating the second detection result based on the first detection result to obtain a second target detection result.
In this embodiment, in the case of obtaining the first detection result and the second detection result, the first detection result may be updated based on the second detection result, and the updated first detection result may be determined as the first target detection result; the second detection result may be updated based on the first detection result, and the updated second detection result may be determined as the second target detection result.
In some embodiments, in the case of obtaining the first detection result and the second detection result, a detection result synchronization policy may be determined, the first detection result is updated based on the detection result synchronization policy and the second detection result to obtain a first target detection result, and the second detection result is updated based on the detection result synchronization policy and the first detection result to obtain a second target detection result.
In some embodiments, in the case of obtaining the first detection result and the second detection result, the first detection result and the second detection result may be compared to determine a difference between the first detection result and the second detection result, the first detection result is updated based on the difference and the second detection result to obtain the first target detection result, and the second detection result is updated based on the difference and the first detection result to obtain the second target detection result.
As an embodiment, when the first detection result and the second detection result are obtained, they may be compared to determine the difference between them, and that difference determines whether the two results need to be updated. If an update is needed, the first detection result is updated based on the difference and the second detection result to obtain the first target detection result, and the second detection result is updated based on the difference and the first detection result to obtain the second target detection result; if not, the first detection result is taken directly as the first target detection result and the second detection result as the second target detection result. Optionally, an update is deemed necessary when the difference indicates a large deviation between the two results, and unnecessary when the deviation is small.
For many scenes (such as portrait scenes), magnification and viewing-angle differences can make a given person occupy only a small part of one image, so the portrait detection algorithm fails to detect the person there, and different processing strategies would then be chosen for the same scene area in the two images (for example, brightening the area where a portrait was detected while processing the undetected area normally). Based on this, whether the detection results need to be synchronized can be decided from the scene content detected in the original images of the two cameras, and the general principle is directed updating. For example, as shown in fig. 4, if the first camera (e.g. the wide-angle camera) detects a portrait or a fast-moving object a in the central area and the second camera (e.g. the ultra-wide-angle camera) does not, that portrait or object a may be synchronized into the second camera's detection result; conversely, if the second camera detects a portrait or fast-moving objects b and c in the right area and the first camera does not, they may be synchronized into the first camera's detection result. For some special scene contents, however, whether to update, and in which direction, can be further decided from the characteristics of the scene. For example, a face detection result may be combined with the position of the face: as shown in fig. 5, the second camera (e.g. the ultra-wide-angle camera) detects a face that the first camera (the wide-angle camera) does not. Because of the ultra-wide-angle camera's larger viewing angle, a face near the edge of its image may simply not appear in the data collected by the wide-angle camera with its smaller viewing angle, so no update to the wide-angle camera's detection result is needed.
Based on the above analysis, updating the first detection result based on the second detection result to obtain the first target detection result may include: acquiring first detection information that is included in the second detection result but not in the first detection result, and updating the first detection information into the first detection result to obtain the first target detection result. As an example, assuming the second detection result includes a first face and the first detection result does not, the first face may be synchronized into the first detection result, so that the finally obtained first target detection result and second target detection result both include the first face.
As one implementation, updating the first detection information into the first detection result to obtain the first target detection result may include: determining the parameter information of the first detection information in the second original image, where the parameter information may include at least one of quantity information and position information; if the parameter information satisfies the target parameter information, the first detection information is synchronized into the first detection result to obtain the first target detection result. As one mode, if the parameter information satisfies the target parameter information while the first camera is a wide-angle camera and the second camera is an ultra-wide-angle camera, the first detection information is synchronized into the first detection result to obtain the first target detection result.
As an example, assume the second detection result includes a first face and the first detection result does not. If the parameter information corresponding to the first face satisfies the target parameter information, the first face may be synchronized into the first detection result to obtain the first target detection result, so that both final target detection results include the first face; if it does not, the first face need not be synchronized into the first detection result. For example, if the first face is located at the edge of the second original image, its parameter information may be considered not to satisfy the target parameter information.
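A toy version of this directed, position-aware synchronization might look as follows; the 10% edge margin and the (x, y, w, h) box format are illustrative assumptions standing in for the "target parameter information" above.

```python
def near_edge(box, frame_w, frame_h, margin=0.1):
    # A box within 10% of any frame border counts as "near the edge".
    x, y, w, h = box
    return (x < frame_w * margin or y < frame_h * margin
            or x + w > frame_w * (1 - margin) or y + h > frame_h * (1 - margin))

def sync_faces(target_faces, source_faces, src_w, src_h):
    """Directed update: copy faces the source camera saw but the target missed,
    unless they hug the source frame edge (outside the target camera's view)."""
    updated = list(target_faces)
    for box in source_faces:
        if box not in target_faces and not near_edge(box, src_w, src_h):
            updated.append(box)
    return updated

uw_faces = [(3840, 100, 200, 200), (1000, 800, 150, 150)]
# The edge-hugging face is skipped; only the central face is synchronized.
print(sync_faces([], uw_faces, 4096, 3072))  # [(1000, 800, 150, 150)]
```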
Based on the analysis, updating the second detection result based on the first detection result to obtain the second target detection result may include: and acquiring second detection information included in the first detection result, wherein the second detection result does not include the second detection information, and updating the second detection information to the second detection result to obtain a second target detection result. As an example, assuming that the first detection result includes the second face, and the second detection result does not include the second face, the second face may be synchronized to the second detection result to obtain the second target detection result, so that the finally obtained first target detection result and the second target detection result both include the second face.
As an implementation manner, updating the second detection information into the second detection result to obtain the second target detection result includes: and determining corresponding parameter information of the second detection information in the first original image, wherein the parameter information can comprise at least one of quantity information and position information, and if the parameter information meets the target parameter information, synchronizing the second detection information to a second detection result to obtain a second target detection result. As one mode, if the parameter information satisfies the target parameter information, and the first camera is an ultra-wide angle camera and the second camera is a wide angle camera, the second detection information is synchronized to the second detection result, and the second target detection result is obtained.
As an example, assume the first detection result includes a second face and the second detection result does not. If the parameter information corresponding to the second face satisfies the target parameter information, the second face may be synchronized into the second detection result to obtain the second target detection result, so that both final target detection results include the second face; if it does not, the second face need not be synchronized into the second detection result. For example, if the second face is located at the edge of the first original image, its parameter information may be considered not to satisfy the target parameter information.
Based on the above analysis, updating the second detection result based on the first detection result to obtain the second target detection result may also include: if the first detection result and the second detection result both include third detection information but their detection results for it differ, synchronously updating the detection result of the third detection information in the second detection result to that in the first detection result, thereby obtaining the second target detection result. The first original image acquired by the first camera and the second original image acquired by the second camera may include the same object; when the two images are detected, the first and second detection results may each include third detection information corresponding to that object, and the corresponding detection results may be the same or different. When they differ, the detection result of the third detection information in the second detection result may be synchronously updated to that in the first detection result, which corresponds to the main camera. For example, if the first detection result includes a third face detected as male while the second detection result includes the same third face detected as female, the detection result of the third face in the second detection result may be synchronously updated to male.
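A minimal sketch of this conflict rule, with object attributes keyed by a hypothetical object id, might look as follows.

```python
def reconcile(first_result: dict, second_result: dict) -> dict:
    """When both results contain the same object but disagree about it,
    the first (main) camera's attributes overwrite the second camera's."""
    second_updated = dict(second_result)
    for obj_id, attrs in first_result.items():
        if obj_id in second_updated and second_updated[obj_id] != attrs:
            second_updated[obj_id] = attrs  # main camera wins on conflicts
    return second_updated

first = {"face_3": {"gender": "male"}}
second = {"face_3": {"gender": "female"}}
print(reconcile(first, second))  # {'face_3': {'gender': 'male'}}
```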
In some embodiments, updating the first detection result based on the second detection result to obtain the first target detection result, and updating the second detection result based on the first detection result to obtain the second target detection result, may include: adding the first detection result to first metadata corresponding to the first original image and the second detection result to second metadata corresponding to the second original image; comparing the two sets of metadata to obtain a comparison result; and, if the comparison result shows a difference between the first detection result and the second detection result, updating the first detection result based on the second detection result to obtain the first target detection result and updating the second detection result based on the first detection result to obtain the second target detection result.
The first original image acquired by the first camera corresponds to first metadata, and the second original image acquired by the second camera corresponds to second metadata. When the first and second detection results are obtained, they may be filled into the first and second metadata respectively, so that each detection result is transmitted to a calibration and update module along with its metadata. When the first metadata carrying the first detection result and the second metadata carrying the second detection result reach the calibration and update module, the module compares the two detection results to obtain a comparison result and determines from it whether the first detection result and/or the second detection result need to be synchronously updated.
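The metadata-borne flow can be sketched as below; the field name `scene_detection` and the dictionary-based metadata are assumptions, since, as noted earlier, each manufacturer designs its own reserved structures.

```python
def attach_detection(metadata: dict, detection: dict) -> dict:
    # Fill the detection result into the frame's reserved metadata area.
    metadata = dict(metadata)
    metadata["scene_detection"] = detection
    return metadata

def calibration_update_needed(meta_first: dict, meta_second: dict) -> bool:
    # The calibration/update module compares the two carried results and
    # reports whether a synchronous update is required.
    return meta_first["scene_detection"] != meta_second["scene_detection"]

m1 = attach_detection({"camera": "wide"}, {"faces": 1})
m2 = attach_detection({"camera": "ultra_wide"}, {"faces": 0})
print(calibration_update_needed(m1, m2))  # True -> trigger synchronous update
```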
In some embodiments, updating the detection results may include: determining the viewing-angle difference between the first camera and the second camera; mapping the detection information to be synchronized in the second detection result based on that difference before synchronizing it into the first detection result to obtain the first target detection result; and likewise mapping the detection information to be synchronized in the first detection result based on that difference before synchronizing it into the second detection result to obtain the second target detection result. This makes the synchronization of detection results between different cameras more accurate.
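As a geometric illustration of such mapping, the sketch below re-expresses a bounding box between two cameras from their horizontal fields of view, under the simplifying assumptions of a pinhole model and aligned optical centers; a real system would rely on full calibration data instead, and the FOV values used here are invented.

```python
import math

def map_box(box, src_fov_deg, dst_fov_deg, frame_w, frame_h):
    """Map an (x, y, w, h) box from the source camera's frame into the
    destination camera's frame, assuming equal resolutions and shared axes."""
    scale = math.tan(math.radians(src_fov_deg / 2)) / math.tan(math.radians(dst_fov_deg / 2))
    x, y, w, h = box
    cx, cy = frame_w / 2, frame_h / 2
    # Offsets from the optical center grow by the FOV ratio.
    return (cx + (x - cx) * scale, cy + (y - cy) * scale, w * scale, h * scale)

# A box seen by a 120-degree ultra-wide camera, re-expressed for an
# 84-degree wide camera sharing the same sensor resolution.
print(map_box((1800, 1400, 200, 200), 120, 84, 4000, 3000))
```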
It should be noted that some detection information is also needed by earlier stages of the processing flow. Taking face detection as an example again: for the SAT function, the face detection results of the two cameras are used during the first crop, and mapping is performed based on them; if the two cameras' face detection results had already been synchronized into the same coordinates and area at that point, the position of the final crop would be affected, which could further degrade the alignment effect of SAT. Therefore, in this embodiment, the point in the pipeline at which detection results are synchronized may be placed after the algorithms that determine position, so as to avoid adversely affecting the processing results of other algorithms.
Step S340: and performing image processing on the first original image based on the first target detection result to obtain the first intermediate image, and performing image processing on the second original image based on the second target detection result to obtain the second intermediate image.
In this embodiment, once the first target detection result and the second target detection result are obtained, the first original image may be processed based on the first target detection result to obtain the first intermediate image, and the second original image based on the second target detection result to obtain the second intermediate image. It can be understood that, because the two target detection results have been synchronously updated, the same scene area of the first and second original images is guaranteed to be processed with the same algorithm strategy, which ensures the uniformity and continuity of the preview or shooting effect and avoids effect jumps caused by detection differences.
In some embodiments, performing image processing on the first original image based on the first target detection result to obtain the first intermediate image, and on the second original image based on the second target detection result to obtain the second intermediate image, may include: updating the first metadata based on the first target detection result to obtain first target metadata, and the second metadata based on the second target detection result to obtain second target metadata; inputting the first target metadata and the first original image into the corresponding image processing algorithm to obtain the first intermediate image; and inputting the second target metadata and the second original image into the corresponding image processing algorithm to obtain the second intermediate image. Once the first and second target metadata are obtained, the updated target metadata can be transmitted to the subsequent image processing algorithms along with the corresponding original images; on receiving them, those algorithms can parse the per-camera image information and the target detection results and perform differentiated processing accordingly.
In some embodiments, performing image processing on the first original image based on the first target detection result to obtain the first intermediate image, and on the second original image based on the second target detection result to obtain the second intermediate image, may include: obtaining a third target detection result based on the first target detection result and the second target detection result, and inputting the third target detection result together with the first original image and the second original image into the corresponding image processing algorithm to obtain the first intermediate image and the second intermediate image. Optionally, the third target detection result may include both the common part and the differing parts of the first and second target detection results.
In this embodiment, the target detection results may not be transmitted with the frames (for example, no longer added to metadata); instead, a combined target detection result (the third target detection result) is determined from the first and second target detection results, or a dedicated channel is opened for its transmission. This decouples the result from any single camera's peculiarities and guarantees that the combined detection result reflects the current scene. Meanwhile, because transmission uses the combined result, the parts shared by the first and second target detection results need to be transmitted only once, which effectively reduces the amount of data transmitted and the system power consumption.
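A possible shape for this combined result, with the shared portion stored once, is sketched below; the key names are illustrative only.

```python
def merge_results(first: dict, second: dict) -> dict:
    """Build a combined 'third' detection result: common entries are stored
    once, and only the per-camera differences are kept separately."""
    common = {k: v for k, v in first.items() if second.get(k) == v}
    return {
        "common": common,  # transmitted a single time
        "first_only": {k: v for k, v in first.items() if k not in common},
        "second_only": {k: v for k, v in second.items() if k not in common},
    }

first = {"scene": "portrait", "faces": 2}
second = {"scene": "portrait", "faces": 1}
print(merge_results(first, second))
# {'common': {'scene': 'portrait'}, 'first_only': {'faces': 2}, 'second_only': {'faces': 1}}
```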
Step S350: and carrying out fusion processing on the first intermediate image and the second intermediate image to obtain a target image.
The specific description of step S350 refers to step S140, and is not repeated here.
Compared with the shooting processing method shown in fig. 1, the method provided by this embodiment further performs scene content detection on the first original image to obtain a first detection result and on the second original image to obtain a second detection result, updates the first detection result based on the second to obtain a first target detection result and the second based on the first to obtain a second target detection result, and then processes the first original image based on the first target detection result to obtain the first intermediate image and the second original image based on the second target detection result to obtain the second intermediate image. Scene content detection is thus performed through both cameras and the detection results are synchronized with each other, so that the image processing effect can be improved while consistency of image processing is ensured.
Referring to fig. 6, fig. 6 is a flowchart illustrating a shooting processing method according to an embodiment of the present application. The method is applied to the electronic device, and will be described in detail with respect to the flowchart shown in fig. 6, where the shooting processing method specifically includes the following steps:
Step S410: and acquiring a first original image acquired by the first camera and acquiring a second original image acquired by the second camera.
The specific description of step S410 is referred to step S110, and will not be repeated here.
Step S420: and performing first scene content detection on the first original image to obtain a third detection result, and performing second scene content detection on the second original image to obtain a fourth detection result.
In the case where multiple cameras are enabled, the method may also, according to the characteristics of the first camera and the second camera, detect or process some scene contents on one path and detect or process the other scene contents on the other path, and then combine the detection results of the two paths when they are used, so that system power consumption can be saved considerably while the effect is guaranteed.
In this embodiment, in the case of obtaining the first original image and the second original image, first scene content detection may be performed on the first original image to obtain a third detection result, and second scene content detection may be performed on the second original image to obtain a fourth detection result. As an example, assuming that 5 scene content detections are required in total, a third detection result may be obtained by performing 3 of the scene content detections on the first original image, and a fourth detection result may be obtained by performing the other 2 scene content detections on the second original image.
In some embodiments, the camera type corresponding to the first camera and the camera type corresponding to the second camera may be determined, and the first scene content corresponding to the first camera and the second scene content corresponding to the second camera may be determined based on those camera types, so that the scene content detected on each path is better adapted to the corresponding camera type and the processing effect is improved.
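A minimal sketch of such a split, assuming a hypothetical wide/telephoto pairing and an arbitrary assignment of the five detection tasks; the embodiment only requires that each path's tasks suit its camera type, not this particular mapping.

```python
# Hypothetical split of five detection tasks across two camera types.
DETECTIONS_BY_CAMERA_TYPE = {
    "wide": ["portrait", "sky", "backlight"],  # 3 detections on the first path
    "telephoto": ["text", "moire"],            # the other 2 on the second path
}

def run_detections(image, tasks: list) -> dict:
    # Stand-in detector returning a dummy result for each requested task.
    return {task: True for task in tasks}

third_result = run_detections("first_raw_image", DETECTIONS_BY_CAMERA_TYPE["wide"])
fourth_result = run_detections("second_raw_image", DETECTIONS_BY_CAMERA_TYPE["telephoto"])
target_result = {**third_result, **fourth_result}  # combine the two paths
```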
Step S430: and obtaining the target detection result based on the third detection result and the fourth detection result.
Step S440: and performing image processing on the first original image based on the target detection result to obtain the first intermediate image, and performing image processing on the second original image based on the target detection result to obtain the second intermediate image.
Step S450: and carrying out fusion processing on the first intermediate image and the second intermediate image to obtain a target image.
The specific description of step S440 to step S450 is referred to step S130 to step S140, and will not be repeated here.
Compared with the shooting processing method shown in fig. 1, the shooting processing method provided by this embodiment of the present application further performs first scene content detection on the first original image to obtain a third detection result, performs second scene content detection on the second original image to obtain a fourth detection result, and obtains the target detection result based on the third detection result and the fourth detection result. In this way, different scene content detections are performed on different images and the respective detection results are combined into the target detection result, which saves system power consumption while the effect is guaranteed.
Referring to fig. 7, fig. 7 is a flowchart illustrating a shooting processing method according to an embodiment of the present application. The method is applied to the electronic device, and will be described in detail with respect to the flowchart shown in fig. 7, where the shooting processing method specifically includes the following steps:
step S510: and acquiring a first original image acquired by the first camera and acquiring a second original image acquired by the second camera.
Step S520: and detecting scene content of at least the first original image in the first original image and the second original image to obtain a target detection result.
The specific description of step S510 to step S520 refers to step S110 to step S120, and is not repeated here.
Step S530: and determining an algorithm processing strategy corresponding to the target detection result.
In the present embodiment, in the case where the target detection result is obtained, an algorithm processing policy corresponding to the target detection result may be determined.
In some embodiments, a mapping relationship may be preset and stored, where the mapping relationship may include a correspondence between a plurality of detection results and a plurality of algorithm processing policies. The correspondence may be that one detection result corresponds to one algorithm processing policy, that one detection result corresponds to a plurality of algorithm processing policies, or that a plurality of detection results correspond to one algorithm processing policy, which is not limited herein. In this embodiment, in the case of obtaining the target detection result, the algorithm processing policy corresponding to the target detection result may be determined based on the mapping relationship.
As an example, if the target detection result represents that a portrait exists, the determined algorithm processing policy is to perform face brightening, and if the target detection result represents that a sky exists, the determined algorithm processing policy is to perform sky color adjustment, and the like, which is not limited herein.
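For illustration, a minimal sketch of such a preset mapping relationship and its use; the strategy names follow the examples above, while the table structure and function names are assumptions.

```python
# Preset mapping relationship; entries follow the examples in the text,
# strategy names are hypothetical.
STRATEGY_MAP = {
    "portrait": ["face_brightening"],
    "sky": ["sky_color_adjustment"],
}

def strategies_for(detection: dict) -> list:
    # Collect every strategy whose triggering detection is present; one
    # detection may map to several strategies, and vice versa.
    ops = []
    for key, present in detection.items():
        if present:
            ops.extend(STRATEGY_MAP.get(key, []))
    return ops

plan = strategies_for({"portrait": True, "sky": True})
# The same plan is applied to both original images, keeping the two
# processing paths consistent.
first_intermediate = ("first_raw_image", plan)
second_intermediate = ("second_raw_image", plan)
```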
Step S540: and processing the first original image based on the algorithm processing strategy to obtain the first intermediate image, and processing the second original image based on the algorithm processing strategy to obtain the second intermediate image.
In this embodiment, in the case of determining the algorithm processing policy, the first intermediate image may be obtained by processing the first original image based on the algorithm processing policy, and the second intermediate image may be obtained by processing the second original image based on the algorithm processing policy, so as to achieve consistency of image processing between the first original image and the second original image.
Step S550: and carrying out fusion processing on the first intermediate image and the second intermediate image to obtain a target image.
The specific description of step S550 is referred to step S140, and will not be repeated here.
Compared with the shooting processing method shown in fig. 1, the shooting processing method provided by this embodiment of the present application further determines an algorithm processing strategy corresponding to the target detection result, processes the first original image based on the algorithm processing strategy to obtain the first intermediate image, and processes the second original image based on the same algorithm processing strategy to obtain the second intermediate image, thereby ensuring the consistency of image processing.
Referring to fig. 8, fig. 8 is a flowchart illustrating a shooting processing method according to an embodiment of the present application. The method is applied to the electronic device, and will be described in detail with respect to the flowchart shown in fig. 8, where the shooting processing method specifically includes the following steps:
step S610: and acquiring a first original image acquired by the first camera and acquiring a second original image acquired by the second camera.
The specific description of step S610 refers to step S110, and is not repeated here.
Step S620: a remaining power of the electronic device is determined.
In this embodiment, the remaining power of the electronic device in which the first camera and the second camera are located may be determined.
In some embodiments, the electronic device may be preset with power detection software, and the remaining power of the electronic device may be determined through the power detection software.
Step S630: and if the residual electric quantity is higher than an electric quantity threshold value, detecting scene contents of the first original image and the second original image to obtain the target detection result.
In some embodiments, the electronic device may preset and store a power threshold, where the power threshold serves as a reference for judging whether the remaining power of the electronic device is sufficient. Therefore, in this embodiment, once the remaining power is obtained, it may be compared with the power threshold to determine whether the remaining power is higher than the power threshold.
If the remaining power is determined to be higher than the power threshold, this indicates that the remaining power of the electronic device is sufficient to execute some high-power-consumption tasks. In that case, scene content detection can be performed on both the first original image and the second original image to obtain the target detection result, which improves the accuracy of the determined target detection result.
Step S640: and if the residual electric quantity is lower than or equal to an electric quantity threshold value, detecting scene content of the first original image to obtain the target detection result.
If the remaining power is determined to be lower than or equal to the power threshold, this indicates that the remaining power of the electronic device is insufficient to support some high-power-consumption tasks. In that case, scene content detection can be performed on the first original image only to obtain the target detection result, thereby reducing the system power consumption of the electronic device.
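A minimal sketch of this power-aware branch, assuming an illustrative 20% threshold and stand-in detection functions; a real device would obtain the remaining power from its operating system.

```python
POWER_THRESHOLD = 20  # percent; an assumed value, the real threshold is device-defined

def detect_scene(images: list) -> dict:
    # Stand-in scene content detector over one or two images.
    return {"images_used": len(images), "portrait": True}

def power_aware_detection(remaining_power: int, first_image, second_image) -> dict:
    if remaining_power > POWER_THRESHOLD:
        # Sufficient battery: detect on both images for higher accuracy.
        return detect_scene([first_image, second_image])
    # Low battery: detect on the first image only to cut power consumption.
    return detect_scene([first_image])

target_result = power_aware_detection(55, "first_raw_image", "second_raw_image")
```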
Step S650: and performing image processing on the first original image based on the target detection result to obtain a first intermediate image, and performing image processing on the second original image based on the target detection result to obtain a second intermediate image.
Step S660: and carrying out fusion processing on the first intermediate image and the second intermediate image to obtain a target image.
The specific description of step S650 to step S660 refer to step S130 to step S140, and are not described herein.
Compared with the shooting processing method shown in fig. 1, the shooting processing method provided by this embodiment of the present application further determines the remaining power of the electronic device, performs scene content detection on both the first original image and the second original image to obtain the target detection result when the remaining power is higher than the power threshold, and performs scene content detection on the first original image only to obtain the target detection result when the remaining power is lower than or equal to the power threshold, thereby balancing the remaining power and the power consumption of the electronic device and improving the user experience.
Referring to fig. 9, fig. 9 is a block diagram illustrating a shooting processing apparatus according to an embodiment of the present application. The photographing processing apparatus 200 is applied to the above-described electronic device, and will be described below with respect to a block diagram shown in fig. 9, the photographing processing apparatus 200 including: an original image acquisition module 210, a detection result acquisition module 220, an intermediate image acquisition module 230, and a target image acquisition module 240, wherein:
the original image obtaining module 210 is configured to obtain a first original image collected by the first camera, and obtain a second original image collected by the second camera.
The detection result obtaining module 220 is configured to perform scene content detection on at least the first original image in the first original image and the second original image, so as to obtain a target detection result.
Further, the detection result obtaining module 220 includes: a first target detection result obtaining sub-module, wherein:
and the first target detection result obtaining sub-module is used for detecting scene content of the first original image and obtaining a first detection result as the target detection result.
Further, the detection result obtaining module 220 includes: a first detection result obtaining sub-module and a second target detection result obtaining sub-module, wherein:
the first detection result obtaining sub-module is used for carrying out scene content detection on the first original image to obtain a first detection result, and carrying out scene content detection on the second original image to obtain a second detection result.
And the second target detection result obtaining sub-module is configured to update the first detection result based on the second detection result to obtain a first target detection result, and to update the second detection result based on the first detection result to obtain a second target detection result.
Further, the second target detection result obtaining submodule includes: a first detection information acquisition unit and a first updating unit, wherein:
a first detection information obtaining unit, configured to obtain first detection information included in the second detection result, where the first detection result does not include the first detection information.
And the first updating unit is used for updating the first detection information to the first detection result to obtain the first target detection result.
Further, the first updating unit includes: a parameter information determining subunit and a first updating subunit, wherein:
and a parameter information determining subunit, configured to determine parameter information corresponding to the first detection information in the second original image, where the parameter information includes at least one of quantity information and position information.
And the first updating subunit, configured to synchronously update the first detection information to the first detection result to obtain the first target detection result if the parameter information meets the target parameter information.
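As an illustrative sketch of this subunit's logic, the snippet below synchronizes a piece of detection information only when its parameter information (quantity and position) meets assumed target criteria; the concrete criteria are hypothetical and not prescribed by this embodiment.

```python
def meets_target(param_info: dict) -> bool:
    # Assumed criteria: at least one instance, located inside the overlap of
    # the two cameras' fields of view.
    return param_info.get("count", 0) >= 1 and param_info.get("in_overlap", False)

def sync_update(first_result: dict, info_key: str, info_value, param_info: dict) -> dict:
    # Synchronize a piece of detection info from the second result into the
    # first result only when its parameter information meets the criteria.
    if meets_target(param_info):
        return {**first_result, info_key: info_value}
    return first_result

first_target_result = sync_update(
    {"portrait": True}, "sky", True, {"count": 1, "in_overlap": True})
```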
Further, the second target detection result obtaining submodule includes: a second detection information acquisition unit and a second updating unit, wherein:
A second detection information obtaining unit, configured to obtain second detection information included in the first detection result, where the second detection result does not include the second detection information.
And a second updating unit, configured to update the second detection information to the second detection result to obtain the second target detection result.
Further, the second target detection result obtaining submodule includes: a third updating unit, wherein:
And the third updating unit is configured to, if the detection results of the third detection information included in the first detection result and the second detection result are different, synchronously update the detection result of the third detection information included in the second detection result to the detection result of the third detection information included in the first detection result, so as to obtain the second target detection result.
Further, the second target detection result obtaining submodule includes: the device comprises a detection result adding unit, a comparison result obtaining unit and a second target detection result obtaining unit, wherein:
and the detection result adding unit is used for adding the first detection result to the first metadata corresponding to the first original image and adding the second detection result to the second metadata corresponding to the second original image.
And the comparison result obtaining unit is used for comparing the first metadata added with the first detection result with the second metadata added with the second detection result to obtain a comparison result.
And a second target detection result obtaining unit, configured to, if it is determined based on the comparison result that there is a difference between the first detection result and the second detection result, update the first detection result based on the second detection result to obtain the first target detection result, and update the second detection result based on the first detection result to obtain the second target detection result.
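A minimal sketch of this comparison-then-update flow, under the assumption that the detection results are carried in the metadata under a hypothetical "scene_detection" key and that updating means absorbing entries the other result lacks.

```python
def compare_and_cross_update(meta1: dict, meta2: dict):
    # Compare the detection results carried in the two frames' metadata
    # (the 'scene_detection' key is a hypothetical name).
    d1 = meta1.get("scene_detection", {})
    d2 = meta2.get("scene_detection", {})
    if d1 == d2:
        return d1, d2  # no difference, nothing to synchronize
    # A difference exists: each target result absorbs entries only the other has.
    first_target = {**{k: v for k, v in d2.items() if k not in d1}, **d1}
    second_target = {**{k: v for k, v in d1.items() if k not in d2}, **d2}
    return first_target, second_target

t1, t2 = compare_and_cross_update(
    {"scene_detection": {"portrait": True}},
    {"scene_detection": {"portrait": True, "sky": True}},
)
# Both t1 and t2 now contain {'portrait': True, 'sky': True}.
```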
The second target detection result obtaining submodule includes: a viewing angle difference determining unit, a first target detection result obtaining unit, and a second target detection result obtaining unit, wherein:
and the visual angle difference determining unit is used for determining the visual angle difference between the first camera and the second camera.
And the first target detection result obtaining unit is used for carrying out mapping processing on detection information to be synchronously updated in the second detection result based on the visual angle difference, and synchronously updating the detection information to the first detection result to obtain the first target detection result.
And the second target detection result obtaining unit is used for carrying out mapping processing on detection information to be synchronously updated in the first detection result based on the visual angle difference, and synchronously updating the detection information to the second detection result to obtain the second target detection result.
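For illustration, a simplified sketch of mapping position-type detection information between the two cameras' coordinates before synchronizing it; the linear scale/offset model below is a simplifying assumption standing in for real calibrated camera geometry.

```python
def map_box(box, scale: float, dx: float, dy: float):
    # box = (x, y, w, h) in the source camera's pixel coordinates; a linear
    # scale/offset model stands in for the real calibrated camera geometry.
    x, y, w, h = box
    return (x * scale + dx, y * scale + dy, w * scale, h * scale)

# Example: a face box detected by the first camera, mapped into the second
# camera's coordinates before being synchronized into its detection result.
mapped = map_box((100.0, 80.0, 60.0, 60.0), scale=2.0, dx=-320.0, dy=-240.0)
```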
Further, the detection result obtaining module 220 includes: the second detection result obtaining sub-module and the third target detection result obtaining sub-module, wherein:
and the second detection result obtaining sub-module is used for carrying out first scene content detection on the first original image to obtain a third detection result, and carrying out second scene content detection on the second original image to obtain a fourth detection result.
And a third target detection result obtaining sub-module, configured to obtain the target detection result based on the third detection result and the fourth detection result.
Further, the detection result obtaining module 220 further includes: a camera type determination sub-module and a scene content determination sub-module, wherein:
the camera type determining submodule is used for determining the camera type corresponding to the first camera and determining the camera type corresponding to the second camera.
The scene content determining sub-module is used for determining first scene content corresponding to the first camera and second scene content corresponding to the second camera based on the camera type corresponding to the first camera and the camera type corresponding to the second camera.
Further, the detection result obtaining module 220 includes: the device comprises a residual electric quantity determining sub-module, a fifth target detection result obtaining sub-module and a sixth target detection result obtaining sub-module, wherein:
and the residual electric quantity determining sub-module is used for determining the residual electric quantity of the electronic equipment.
And the fourth target detection result obtaining sub-module is used for detecting scene contents of the first original image and the second original image if the residual electric quantity is higher than an electric quantity threshold value, so as to obtain the target detection result.
And a fifth target detection result obtaining sub-module, configured to perform scene content detection on the first original image if the remaining power is lower than or equal to the power threshold, to obtain the target detection result.
An intermediate image obtaining module 230, configured to obtain a first intermediate image by performing image processing on the first original image based on the target detection result, and obtain a second intermediate image by performing image processing on the second original image based on the target detection result.
Further, the intermediate image obtaining module 230 includes: a first intermediate image acquisition sub-module, wherein:
the first intermediate image obtaining sub-module is used for carrying out image processing on the first original image based on the first target detection result to obtain the first intermediate image, and carrying out image processing on the second original image based on the second target detection result to obtain the second intermediate image.
Further, the first intermediate image obtaining submodule includes: a metadata updating unit and a first intermediate image obtaining unit, wherein:
and the metadata updating unit is used for updating the first metadata based on the first target detection result to obtain first target metadata and updating the second metadata based on the second target detection result to obtain second target metadata.
The first intermediate image obtaining unit is used for inputting the first target metadata and the first original image into a corresponding image processing algorithm to perform image processing to obtain the first intermediate image, and inputting the second target metadata and the second original image into a corresponding image processing algorithm to perform image processing to obtain the second intermediate image.
Further, the first intermediate image obtaining submodule includes: a third target detection result obtaining unit and a second intermediate image obtaining unit, wherein:
and a third target detection result obtaining unit configured to obtain a third target detection result based on the first target detection result and the second target detection result.
And the second intermediate image obtaining unit is used for inputting the third target detection result, the first original image and the second original image into a corresponding image processing algorithm to perform image processing so as to obtain the first intermediate image and the second intermediate image.
Further, the intermediate image obtaining module 230 includes: an algorithm processing strategy determination sub-module and a second intermediate image acquisition sub-module, wherein:
and the algorithm processing strategy determining submodule is used for determining an algorithm processing strategy corresponding to the target detection result.
The second intermediate image obtaining sub-module is used for processing the first original image based on the algorithm processing strategy to obtain the first intermediate image, and processing the second original image based on the algorithm processing strategy to obtain the second intermediate image.
The target image obtaining module 240 is configured to obtain a target image by performing fusion processing on the first intermediate image and the second intermediate image.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In several embodiments provided herein, the coupling of the modules to each other may be electrical, mechanical, or other.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
Referring to fig. 10, a block diagram of an electronic device 100 according to an embodiment of the present application is shown. The electronic device 100 may be a smart phone, a tablet computer, an electronic book reader, or another device capable of running application programs. The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more application programs being configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects various parts of the entire electronic device 100 through various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 110 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed; and the modem is used to handle wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 100 in use (such as a phonebook, audio and video data, and chat log data), and the like.
Referring to fig. 11, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable medium 300 has stored therein program code which can be invoked by a processor to perform the methods described in the method embodiments described above.
The computer readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 300 comprises a non-volatile computer readable medium (non-transitory computer-readable storage medium). The computer readable storage medium 300 has storage space for program code 310 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. Program code 310 may be compressed, for example, in a suitable form.
In summary, according to the shooting processing method, apparatus, electronic device, and storage medium provided by the embodiments of the present application, a first original image acquired by a first camera and a second original image acquired by a second camera are obtained; scene content detection is performed on at least the first original image of the two to obtain a target detection result; image processing is performed on the first original image based on the target detection result to obtain a first intermediate image, and on the second original image based on the target detection result to obtain a second intermediate image; and fusion processing is performed on the first intermediate image and the second intermediate image to obtain a target image. By performing scene content detection on at least the image acquired by the first camera among the images acquired by the two cameras, and processing both images based on the detection result, the consistency of image processing can be ensured and the resulting image effect improved.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications and substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (18)

1. A photographing processing method, characterized in that the method comprises:
acquiring a first original image acquired by a first camera and acquiring a second original image acquired by a second camera;
detecting scene content of at least the first original image in the first original image and the second original image to obtain a target detection result;
performing image processing on the first original image based on the target detection result to obtain a first intermediate image, and performing image processing on the second original image based on the target detection result to obtain a second intermediate image;
and carrying out fusion processing on the first intermediate image and the second intermediate image to obtain a target image.
2. The method according to claim 1, wherein the performing scene content detection on at least the first original image of the first original image and the second original image to obtain a target detection result includes:
and detecting scene content of the first original image to obtain a first detection result as the target detection result.
3. The method according to claim 1, wherein the performing scene content detection on at least the first original image of the first original image and the second original image to obtain a target detection result includes:
performing scene content detection on the first original image to obtain a first detection result, and performing scene content detection on the second original image to obtain a second detection result;
updating the first detection result based on the second detection result to obtain a first target detection result, and updating the second detection result based on the first detection result to obtain a second target detection result;
the image processing of the first original image based on the target detection result to obtain a first intermediate image, and the image processing of the second original image based on the target detection result to obtain a second intermediate image, includes:
and performing image processing on the first original image based on the first target detection result to obtain the first intermediate image, and performing image processing on the second original image based on the second target detection result to obtain the second intermediate image.
4. The method of claim 3, wherein the updating the first detection result based on the second detection result to obtain a first target detection result comprises:
acquiring first detection information included in the second detection result, wherein the first detection result does not include the first detection information;
and updating the first detection information to the first detection result to obtain the first target detection result.
5. The method of claim 4, wherein the updating the first detection information to the first detection result to obtain the first target detection result comprises:
determining corresponding parameter information of the first detection information in the second original image, wherein the parameter information comprises at least one of quantity information and position information;
and if the parameter information meets the target parameter information, synchronously updating the first detection information to the first detection result to obtain the first target detection result.
6. The method of claim 3, wherein the updating the second detection result based on the first detection result to obtain a second target detection result comprises:
acquiring second detection information included in the first detection result, wherein the second detection result does not include the second detection information;
and updating the second detection information to the second detection result to obtain the second target detection result.
7. The method of claim 3, wherein the updating the second detection result based on the first detection result to obtain a second target detection result comprises:
if the detection results of the third detection information included in the first detection result and the second detection result are different, synchronously updating the detection results of the third detection information included in the second detection result into the detection results of the third detection information included in the first detection result to obtain the second target detection result.
8. The method of claim 3, wherein the updating the first detection result based on the second detection result to obtain a first target detection result and the updating the second detection result based on the first detection result to obtain a second target detection result comprises:
adding the first detection result to first metadata corresponding to the first original image, and adding the second detection result to second metadata corresponding to the second original image;
comparing the first metadata added with the first detection result with the second metadata added with the second detection result to obtain a comparison result;
and if the first detection result and the second detection result are determined to be different based on the comparison result, updating the first detection result based on the second detection result to obtain the first target detection result, and updating the second detection result based on the first detection result to obtain the second target detection result.
9. The method of claim 8, wherein the image processing the first original image based on the first object detection result to obtain the first intermediate image, and the image processing the second original image based on the second object detection result to obtain the second intermediate image, comprises:
updating the first metadata based on the first target detection result to obtain first target metadata, and updating the second metadata based on the second target detection result to obtain second target metadata;
and inputting the first target metadata and the first original image into a corresponding image processing algorithm to perform image processing to obtain the first intermediate image, and inputting the second target metadata and the second original image into a corresponding image processing algorithm to perform image processing to obtain the second intermediate image.
10. The method of claim 8, wherein the image processing the first original image based on the first object detection result to obtain the first intermediate image, and the image processing the second original image based on the second object detection result to obtain the second intermediate image, comprises:
obtaining a third target detection result based on the first target detection result and the second target detection result;
and inputting the third target detection result, the first original image and the second original image into a corresponding image processing algorithm to perform image processing to obtain the first intermediate image and the second intermediate image.
11. The method of claim 3, wherein the updating the first detection result based on the second detection result to obtain a first target detection result and the updating the second detection result based on the first detection result to obtain a second target detection result comprises:
determining a difference in viewing angle between the first camera and the second camera;
mapping the detection information to be synchronously updated in the second detection result based on the visual angle difference, and synchronously updating the detection information to the first detection result to obtain the first target detection result;
and mapping the detection information to be synchronously updated in the first detection result based on the visual angle difference, and synchronously updating the detection information to the second detection result to obtain the second target detection result.
12. The method according to claim 1, wherein the performing scene content detection on at least the first original image of the first original image and the second original image to obtain a target detection result includes:
performing first scene content detection on the first original image to obtain a third detection result, and performing second scene content detection on the second original image to obtain a fourth detection result;
and obtaining the target detection result based on the third detection result and the fourth detection result.
13. The method of claim 12, wherein before performing the first scene content detection on the first original image to obtain a third detection result and performing the second scene content detection on the second original image to obtain a fourth detection result, further comprising:
determining a camera type corresponding to the first camera and determining a camera type corresponding to the second camera;
and determining first scene content corresponding to the first camera and second scene content corresponding to the second camera based on the camera type corresponding to the first camera and the camera type corresponding to the second camera.
14. The method according to any one of claims 1-13, wherein the image processing the first original image based on the target detection result to obtain a first intermediate image, and the image processing the second original image based on the target detection result to obtain a second intermediate image, comprises:
determining an algorithm processing strategy corresponding to the target detection result;
and processing the first original image based on the algorithm processing strategy to obtain the first intermediate image, and processing the second original image based on the algorithm processing strategy to obtain the second intermediate image.
15. The method according to any one of claims 1-13, wherein performing scene content detection on at least the first original image of the first original image and the second original image to obtain a target detection result includes:
determining the residual electric quantity of the electronic equipment;
if the residual electric quantity is higher than an electric quantity threshold value, detecting scene contents of the first original image and the second original image to obtain the target detection result; or
if the residual electric quantity is lower than or equal to an electric quantity threshold value, detecting scene content of the first original image to obtain the target detection result.
16. A photographing processing apparatus, characterized in that the apparatus comprises:
the original image acquisition module is used for acquiring a first original image acquired by the first camera and acquiring a second original image acquired by the second camera;
the detection result obtaining module is used for detecting scene content of at least the first original image in the first original image and the second original image to obtain a target detection result;
the intermediate image obtaining module is used for carrying out image processing on the first original image based on the target detection result to obtain a first intermediate image, and carrying out image processing on the second original image based on the target detection result to obtain a second intermediate image;
and the target image obtaining module is used for carrying out fusion processing on the first intermediate image and the second intermediate image to obtain a target image.
17. An electronic device comprising a memory and a processor, the memory coupled to the processor, the memory storing instructions that when executed by the processor perform the method of any of claims 1-15.
18. A computer readable storage medium having stored therein program code which is callable by a processor to perform the method of any one of claims 1-15.