WO2023005450A1 - Image processing method, device, terminal and storage medium

Image processing method, device, terminal and storage medium

Info

Publication number
WO2023005450A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
target area
area
currently processed
Application number
PCT/CN2022/097953
Other languages
English (en)
French (fr)
Inventor
朱文波
Original Assignee
哲库科技(上海)有限公司
Application filed by 哲库科技(上海)有限公司
Publication of WO2023005450A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/61 - Control of cameras or camera modules based on recognised objects
    • H04N 23/67 - Focus control based on electronic image sensor signals
    • H04N 23/95 - Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 - Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • the embodiments of the present application relate to the field of computer technologies, and in particular to an image processing method, device, terminal, and storage medium.
  • Embodiments of the present application provide an image processing method, device, terminal, and storage medium, which can improve the quality of captured images.
  • the technical solution is as follows:
  • In one aspect, an image processing method executed by a terminal is provided, the method comprising: determining multiple target areas in a currently processed image; determining a first shooting parameter based on image quality characteristics of a first target area among the multiple target areas, and determining a second shooting parameter based on image quality characteristics of a second target area among the multiple target areas, wherein the second shooting parameter is at least partially different from the first shooting parameter; acquiring, from a first image acquisition device, a first image captured based on the first shooting parameter, the first image being later than the currently processed image in frame timing; acquiring, from the first image acquisition device, a second image captured based on the second shooting parameter, the second image being later than the first image in frame timing; and generating a third image based on the first image and the second image.
  • In another aspect, an image processing device is provided, comprising:
  • a target area determination module configured to determine multiple target areas in the currently processed image;
  • a shooting parameter determination module configured to determine a first shooting parameter based on image quality characteristics of a first target area among the multiple target areas, and to determine a second shooting parameter based on image quality characteristics of a second target area among the multiple target areas, wherein the second shooting parameter is at least partially different from the first shooting parameter;
  • an image acquisition module configured to acquire, from a first image acquisition device, a first image captured based on the first shooting parameter, wherein the first image is later than the currently processed image in frame timing, and to acquire, from the first image acquisition device, a second image captured based on the second shooting parameter, wherein the second image is later than the first image in frame timing;
  • an image generating module configured to generate a third image based on the first image and the second image.
  • In another aspect, a terminal is provided, including a processor and a memory; the memory stores at least one instruction, and the at least one instruction is executed by the processor to implement the image processing method described in the above aspects.
  • In another aspect, a computer-readable storage medium is provided, storing at least one instruction, and the at least one instruction is executed by a processor to implement the image processing method described in the above aspect.
  • In another aspect, a computer program product is provided, storing at least one program code, and the at least one program code is loaded and executed by a processor to implement the image processing method described in the above aspect.
  • Fig. 1 shows a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application.
  • Fig. 2 shows a flowchart of an image processing method provided by an exemplary embodiment of the present application.
  • Fig. 3 shows a flowchart of an image processing method provided by an exemplary embodiment of the present application.
  • Fig. 4 shows a schematic diagram of an image processing process provided by an exemplary embodiment of the present application
  • Fig. 5 shows a schematic diagram of multiple target areas in an image provided by an exemplary embodiment of the present application
  • Fig. 6 shows a schematic diagram of an image processing process provided by an exemplary embodiment of the present application
  • Fig. 7 shows a structural block diagram of an image processing device provided by an exemplary embodiment of the present application.
  • Fig. 8 shows a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • Fig. 9 shows a structural block diagram of a server provided by an exemplary embodiment of the present application.
  • The "plurality" mentioned herein means two or more.
  • "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, A and B exist simultaneously, or B exists alone.
  • The character "/" generally indicates an "or" relationship between the contextual objects.
  • The terms "first", "second", "third", "fourth" and the like used in the present application may describe various concepts, but unless otherwise specified, these concepts are not limited by these terms; the terms are only used to distinguish one concept from another. For example, a first target area could be termed a second target area, and, similarly, a second target area could be termed a first target area, without departing from the scope of the present application.
  • In one aspect, embodiments of the present application provide an image processing method executed by a terminal, the method comprising: determining multiple target areas in a currently processed image; determining a first shooting parameter and a second shooting parameter based on the image quality characteristics of a first target area and a second target area among the multiple target areas, respectively; acquiring a first image captured based on the first shooting parameter and a second image captured based on the second shooting parameter; and generating a third image based on the first image and the second image.
  • In a possible implementation, the method further includes: performing image processing on the first target area in the currently processed image based on a first processing parameter, and performing image processing on the second target area in the currently processed image based on a second processing parameter, wherein said second processing parameter is at least partially different from said first processing parameter.
  • In a possible implementation, before generating the third image based on the first image and the second image, the method further includes: performing image processing on the first target area in the first image based on a third processing parameter, and performing image processing on the second target area in the second image based on a fourth processing parameter, wherein said fourth processing parameter is at least partially different from said third processing parameter.
  • In a possible implementation, the determining multiple target areas in the currently processed image includes: identifying the area where a target object in the currently processed image is located, the target object being an object belonging to a target type; and determining the identified area as the target area, or determining a circumscribed area of the identified area as the target area.
  • In a possible implementation, the identifying the area where the target object in the currently processed image is located includes: determining object features corresponding to the target type, where the object features refer to features of target objects belonging to the target type; and determining an area in the currently processed image that matches the object features as the area where the target object is located.
  • In a possible implementation, the determining the area in the currently processed image that matches the object features as the area where the target object is located includes: determining, based on the object features, a plurality of pixel points in the currently processed image whose distribution features match the object features; and determining the area formed by the plurality of pixel points as the area where the target object is located.
  • In a possible implementation, the determining multiple target areas in the currently processed image includes: during capture of the currently processed image, performing eye tracking through a second image acquisition device to determine a key point corresponding to the gaze point of the eye in the currently processed image, wherein the shooting ranges of the first image acquisition device and the second image acquisition device are different; and determining the area to which the key point belongs as the target area.
  • In a possible implementation, the determining multiple target areas in the currently processed image includes: determining the area where a first object focused on by the first image acquisition device is located in the currently processed image; determining the area where a second object on the same focal plane as the first object is located in the currently processed image; and determining the area where the first object is located and the area where the second object is located as target areas.
  • In a possible implementation, the determining multiple target areas in the currently processed image includes: if the first image acquisition device is in a motion state, determining a motion area in the currently processed image as the target area, the motion area being obtained by photographing an object in a moving state.
  • In a possible implementation, before identifying the area where the target object in the currently processed image is located, the method further includes: acquiring the target type, for example an input target type, or a target type obtained by performing type identification on the currently processed image.
  • In a possible implementation, the determining the first shooting parameter based on the image quality characteristics of the first target area among the multiple target areas includes: determining the brightness of the first target area; and determining at least one of an aperture value or an exposure duration based on the brightness of the first target area, wherein the brightness of the first target area is positively correlated with the aperture value and negatively correlated with the exposure duration.
  • In a possible implementation, the generating the third image based on the first image and the second image includes: fusing the first image with the second image to obtain the third image.
  • In a possible implementation, before generating the third image based on the first image and the second image, the method further includes at least one of the following: performing differential processing on the first target area and other areas in the first image; performing differential processing on the second target area and other areas in the second image.
  • In a possible implementation, the performing differential processing on the first target area and other areas in the first image includes: processing the first target area in the first image without processing the other areas in the first image; and the performing differential processing on the second target area and other areas in the second image includes: processing the second target area in the second image without processing the other areas in the second image.
  • In a possible implementation, the method further includes: generating a target video based on the currently processed image and the third image, wherein the currently processed image precedes the third image in the target video.
  • In the solution provided by the embodiments of the present application, shooting does not focus on only one area of the image. Instead, multiple target areas in the already captured image are attended to, and shooting parameters are determined separately for the image quality characteristics of the first target area and the second target area among those areas. Because the first shooting parameter is determined based on the image quality characteristics of the first target area, the image quality of the first target area in the first image captured with it is high; likewise, the image quality of the second target area in the second image captured with the second shooting parameter is high. Consequently, in the third image generated from the first image and the second image, the image quality of both the first target area and the second target area is relatively high, which ensures the image quality of multiple areas in the captured third image and improves the quality of captured images.
  • The embodiments of the present application provide an image processing method executed by a terminal; through this method, the terminal can capture images in which multiple target areas have high image quality.
  • The terminal is any of various types of terminals such as mobile phones, cameras, desktop computers, notebook computers, and tablet computers.
  • the terminal includes multiple image acquisition devices, for example, a first image acquisition device and a second image acquisition device.
  • For example, the first image acquisition device is a rear camera of the terminal, and the second image acquisition device is a front camera of the terminal.
  • The terminal can capture images through any image acquisition device while the other image acquisition devices are on standby.
  • The implementation environment of the embodiments of the present application includes a server 101 and a terminal 102. The terminal 102 photographs the current scene, obtains the currently processed image, and sends the currently processed image to the server 101.
  • The server 101 determines a plurality of target areas in the currently processed image, determines first shooting parameters based on the image quality characteristics of the first target area among the multiple target areas, determines second shooting parameters based on the image quality characteristics of the second target area among the multiple target areas, and sends the first shooting parameters and the second shooting parameters to the terminal 102.
  • The terminal 102 receives the first shooting parameters and the second shooting parameters, captures the first image based on the first shooting parameters and then captures the second image based on the second shooting parameters, and sends the first image and the second image to the server 101; the server 101 then generates a third image based on the first image and the second image.
  • a target application provided by the server 101 is installed on the terminal 102, and the terminal 102 can implement functions such as data transmission and message interaction through the target application.
  • the target application is a target application in the operating system of the terminal 102, or a target application provided by a third party.
  • the target application has the functions of capturing images and processing images.
  • the target application can also have other functions, such as sharing images, shooting videos, etc., which is not limited in this embodiment of the present application.
  • the target application is a short video application, a camera application, a shopping application, a chat application or other applications.
  • The image processing method provided in the embodiments of the present application can be applied to image capture scenarios. For example, to ensure the quality of multiple target areas in an image, after the currently processed image is captured, the method provided by this application uses that image to capture another image in which the image quality of the multiple target areas is relatively high.
  • The image processing method provided in the embodiments of the present application can also be applied to video shooting scenarios. For example, when shooting a video, the method provided by the present application captures the next frame of the video based on the already captured image, thereby ensuring the image quality of multiple target areas in the next frame.
  • Fig. 2 shows a flowchart of an image processing method provided by an exemplary embodiment of the present application. Referring to Fig. 2, the method includes:
  • the terminal determines multiple target areas in a currently processed image.
  • the currently processed image is obtained by shooting the current scene.
  • the target area includes objects in the scene.
  • the current scene is any scene, for example, the current scene is an indoor scene including various delicacies, an indoor scene including multiple characters, an outdoor scene including various plants, and the like.
  • the currently processed image is an original image taken from the current scene, or the currently processed image is an image after processing the original image.
  • the currently processed image is in any format, for example, the currently processed image is in a RAW (original) format, which is not limited in this embodiment of the present application.
  • The target area is any area in the currently processed image; for example, the target area is an ROI (region of interest) in the currently processed image.
  • the shape of the target area is arbitrary.
  • For example, the shape of the target area is a rectangle, a circle, or an irregular shape.
  • the shape of the target area is the shape of the object in the target area.
  • the target area includes objects in the scene.
  • the scene is an indoor scene including various delicacies, and the target area includes delicacies.
  • the scene is an indoor scene including multiple characters, and the target area includes characters.
  • the terminal determines a first shooting parameter based on the image quality characteristics of a first target area among the multiple target areas, and determines a second shooting parameter based on the image quality characteristics of a second target area among the multiple target areas.
  • The second shooting parameter is at least partially different from the first shooting parameter.
  • the shooting parameter corresponding to each target area instructs the first image acquisition device to focus on the object in the target area.
  • the first shooting parameter indicates to focus the first image acquisition device on the object in the first target area
  • the second shooting parameter indicates to focus the first image acquisition device on the object in the second target area.
  • the shooting parameters corresponding to the target area can adjust the brightness of the target area in the captured image.
  • the first shooting parameter corresponding to the first target area can adjust the brightness of the first target area in the captured image
  • The second shooting parameter corresponding to the second target area can adjust the brightness of the second target area in the captured image. The shooting parameters can also achieve other effects, which are not limited in this embodiment of the present application.
  • The image quality features of the target area indicate the image quality of the target area; that is, they are features that reflect the image quality of the target area, for example the sharpness and brightness of the target area. If the sharpness of the target area is high and its brightness is within a threshold range, the image quality of the target area is high; if the sharpness is low and the brightness is outside the threshold range, the image quality is low.
  • The shooting parameters include any parameters that determine the quality of the captured image, for example focusing parameters, exposure duration, and aperture value.
  • the shooting parameters determined for the target area can ensure the image quality of the target area in the image captured based on the shooting parameters.
  • the first image acquisition device is any image acquisition device in the terminal, for example, the first image acquisition device is a rear camera of the terminal, which is not limited in this embodiment of the present application.
  • the terminal acquires, from the first image acquisition device, the first image captured based on the first shooting parameters and the second image captured based on the second shooting parameters.
  • The first image is later than the currently processed image in frame timing, and the second image is later than the first image in frame timing.
  • both the currently processed image and the first image include the first target area, and the first target area in the currently processed image and the first target area in the first image are areas obtained by photographing the same object.
  • Both the currently processed image and the second image include a second target area, and the second target area in the currently processed image and the second target area in the second image are areas obtained by photographing the same object.
  • the terminal generates a third image based on the first image and the second image.
  • the terminal fuses the first image and the second image to obtain the third image.
  • The embodiments of the present application are described by taking a currently processed image that includes the first target area and the second target area as an example.
  • In other embodiments, the currently processed image also includes other target areas, for example a third target area. In this case, the terminal determines a third shooting parameter based on the image quality characteristics of the third target area, acquires from the first image acquisition device a fourth image captured based on the third shooting parameter, and then generates the third image based on the first image, the second image, and the fourth image, wherein the fourth image is later than the second image in frame timing.
  • the terminal determines corresponding shooting parameters for multiple target areas in the currently processed image, sequentially captures images based on the multiple determined shooting parameters, and generates a third image based on the captured multiple frames of images.
  • the embodiment of the present application does not limit the number of target regions in the currently processed image.
  • Fig. 3 shows a flowchart of an image processing method provided by an exemplary embodiment of the present application, and the embodiment provides various methods for determining a target area in an image.
  • the method includes:
  • the terminal determines multiple target areas in the currently processed image.
  • the terminal determines multiple target areas in the currently processed image, including:
  • the terminal identifies the area where the target object in the currently processed image is located, and the target object is an object belonging to the target type; the terminal determines the identified area as the target area, or determines the circumscribed area of the identified area as the target area.
  • the circumscribing area is an area within the smallest circumscribing rectangle of the area identified by the terminal, or an area within the smallest circumscribing circle of the identified area, or an area within the smallest circumscribing ellipse of the identified area, and the like. It should be noted that, since the currently processed image may include regions where multiple target objects are located, there may be multiple target regions determined by the terminal.
  • the target type is used to indicate the type of the target object in the currently processed image.
  • the object type is used to indicate the type of the object that the user is interested in in the currently processed image.
  • the target type is any type, for example, the target type is a food type, a character type, a landscape type, a building type, etc., which is not limited in this embodiment of the present application.
  • Before identifying the area where the target object in the currently processed image is located, the terminal first acquires the target type. In a possible implementation manner, the terminal acquires an input target type. For example, before or after capturing the currently processed image, the terminal displays a type selection interface that includes multiple object types; the terminal obtains the object type selected from the type selection interface and determines it as the target type. As a concrete case, before taking an image, the user selects a shooting scene in a shooting scene interface that includes multiple shooting scenes such as a portrait scene and a food scene. The shooting scene interface is the type selection interface, and each shooting scene represents an object type: the portrait scene represents the person type, and the food scene represents the food type.
  • Since the target type is input by the user, it accurately reflects the target object that the user is interested in, and subsequently determining the target area based on the area where that target object is located ensures the image quality, in the acquired third image, of the area where the target object of interest to the user is located.
  • the terminal performs type identification on the currently processed image to obtain the target type. For example, the terminal identifies the type of the currently processed image, obtains the type of the currently processed image, and determines the type of the currently processed image as the target type. For example, most areas in the currently processed image are grasslands, and the recognized type of the currently processed image is the landscape type, and the terminal determines the landscape type as the target type.
  • the currently processed image includes multiple persons, and the recognized type of the currently processed image is the person type, and the terminal determines the person type as the target type.
  • the terminal identifies the type of the currently processed image through an image type identification model to obtain the type of the currently processed image.
  • the image type identification model is obtained through training samples including images and image type labels, and can identify the image type of any image.
  • the terminal can also identify the type of the currently processed image in other ways, which is not limited in this embodiment of the present application.
  • Since the currently processed image is taken by the user, the image itself reflects the target objects the user is interested in. Identifying the type of the currently processed image therefore yields a target type that represents the type of target object the user is interested in, and subsequently determining the target area based on the area where a target object of that type is located ensures the image quality, in the acquired third image, of the area where the target object of interest to the user is located.
  • After the terminal recognizes the area where the target object is located, it can directly determine the identified area as the target area; in this way, the image quality of the area where the target object is located is guaranteed in the third image subsequently obtained based on the target area. Alternatively, the terminal determines the circumscribed area of the identified area as the target area. Because the area where the target object is located is generally irregular in shape, subsequent image processing of such a target area increases the terminal's data processing pressure; since the shape of the circumscribed area is more regular, determining the circumscribed area as the target area reduces that pressure, as in the sketch below.
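  • As an illustration only (the application specifies no code), the following is a minimal sketch of computing such a circumscribed rectangle. It assumes the identified region arrives as a binary mask and uses OpenCV's cv2.boundingRect for the minimal circumscribed rectangle; the example mask is hypothetical.

```python
import cv2
import numpy as np

def circumscribed_rect(mask: np.ndarray):
    """Return (x, y, w, h) of the minimal bounding rectangle of a binary mask."""
    ys, xs = np.nonzero(mask)                        # pixel coordinates of the region
    if len(xs) == 0:
        raise ValueError("empty region mask")
    points = np.column_stack((xs, ys)).astype(np.int32)
    return cv2.boundingRect(points)                  # regular block, cheaper to process

# Hypothetical example: an irregular blob inside a 12x12 mask.
mask = np.zeros((12, 12), dtype=np.uint8)
mask[3:7, 4:9] = 1
mask[6:9, 6:8] = 1
x, y, w, h = circumscribed_rect(mask)
print(f"target area: x={x}, y={y}, w={w}, h={h}")
```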
  • In a possible implementation, the terminal identifies the area where the target object is located in the currently processed image as follows: the terminal determines the object features corresponding to the target type, and determines the area in the currently processed image that matches the object features as the area where the target object is located. Since the area that matches the object features of the target object is most likely to be the area where the target object is located, this manner determines that area with high accuracy.
  • the object feature refers to the feature of the target object belonging to the target type.
  • the features of the target object refer to image features of the target object.
  • the target object is a person, and the image features of the person include image features with structures such as eyes, nose, mouth, and limbs.
  • the target object is a building, and the image features of the building include image features having various regular geometric shapes and the like.
  • the terminal stores the corresponding relationship between the target type and the object features of the target objects belonging to the target type.
  • the terminal stores the corresponding relationship between food types and food features, the corresponding relationship between character types and character features, the corresponding relationship between building types and building features, and the like.
  • the terminal determines the object feature corresponding to the target type based on the correspondence between the target type and the object feature stored in the terminal.
  • In a possible implementation, the terminal determines the matching area as follows: the terminal determines, based on the object features, a plurality of pixel points in the currently processed image whose distribution features match the object features, and determines the area formed by those pixel points as the area where the target object is located. Since pixel points whose distribution features match the object features are most likely the pixel points constituting the area of the target object, this manner is highly accurate.
  • In another possible implementation, the terminal determines a plurality of pixel points whose distribution features and color features both match the object features, and determines the area formed by those pixel points as the area where the target object is located. Since the color feature of a pixel point also reflects the object the pixel belongs to, combining color features with distribution features when determining the matching pixel points further improves the accuracy of determining the area where the target object is located, as in the sketch below.
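  • The following sketch is illustrative, not the application's algorithm: an assumed HSV color range stands in for the stored object features of the target type, and connected components above a size threshold stand in for the distribution match. All threshold values are assumptions.

```python
import cv2
import numpy as np

def match_object_regions(bgr: np.ndarray,
                         hsv_lo=(0, 60, 60), hsv_hi=(25, 255, 255),
                         min_pixels: int = 500):
    """Find regions whose color feature (HSV range) and distribution feature
    (connected-component size) both match the assumed object features."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv,
                             np.array(hsv_lo, dtype=np.uint8),
                             np.array(hsv_hi, dtype=np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(color_mask)
    regions = []
    for i in range(1, n):                            # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_pixels:
            regions.append((stats[i, cv2.CC_STAT_LEFT],
                            stats[i, cv2.CC_STAT_TOP],
                            stats[i, cv2.CC_STAT_WIDTH],
                            stats[i, cv2.CC_STAT_HEIGHT]))
    return regions                                   # list of (x, y, w, h)
```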
  • The terminal can also use the gaze point of the user's eyeballs during shooting to determine the target areas in the currently processed image. That is, during capture of the currently processed image, the terminal performs eye tracking through the second image acquisition device to determine the key point corresponding to the gaze point of the eye in the currently processed image, and determines the area to which the key point belongs as the target area.
  • the area to which the key point belongs is the area where the object corresponding to the key point is located.
  • the shooting ranges of the first image acquisition device and the second image acquisition device are different.
  • the first image collection device is a rear camera of the terminal
  • the second image collection device is a front camera of the terminal.
  • During capture of the currently processed image, the second image acquisition device of the terminal tracks the user's eyeballs to obtain information on the gaze point of the eyes, and the gaze point is mapped into the currently processed image to obtain the corresponding key point. Since the gaze point of the user's eyes changes while the currently processed image is being shot, the terminal obtains information on multiple gaze points; in this case, the terminal determines the key points corresponding to the multiple gaze points in the currently processed image and thereby obtains multiple target areas.
  • Because the gaze point of the eyeballs reflects the area the user is actually looking at, using it to determine the target areas in the currently processed image yields high accuracy, as in the sketch below.
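  • A minimal sketch of the gaze-to-region mapping, assuming the eye tracker reports a normalized gaze point; the tracker output format and the coordinates used here are hypothetical placeholders, since real terminals expose vendor-specific interfaces.

```python
def gaze_to_keypoint(gaze_xy_norm, image_w, image_h):
    """Map a normalized [0, 1] gaze point into pixel coordinates of the image."""
    gx, gy = gaze_xy_norm
    return int(gx * image_w), int(gy * image_h)

def region_containing(point, regions):
    """Return the (x, y, w, h) region that contains the key point, if any."""
    px, py = point
    for (x, y, w, h) in regions:
        if x <= px < x + w and y <= py < y + h:
            return (x, y, w, h)
    return None

# Each distinct gaze point observed during capture yields one key point,
# so several target areas may be produced for a single image.
keypoint = gaze_to_keypoint((0.62, 0.40), image_w=4000, image_h=3000)
target = region_containing(keypoint, [(2300, 1000, 400, 500)])
```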
  • The terminal can also use the focal plane to determine the target area in the currently processed image. That is, the terminal determines the area where the first object focused on by the first image acquisition device is located in the currently processed image, determines the area where a second object on the same focal plane as the first object is located in the currently processed image, and determines the area where the first object is located and the area where the second object is located as target areas.
  • the plane perpendicular to the optical axis where the focal point of the first image acquisition device is located is the focal plane.
  • The first object focused on by the first image acquisition device in the currently processed image is most likely the object of interest to the user, and multiple objects of interest to the user are often located on the same focal plane. Therefore, determining the area where the first object is located and the area where the second object on the same focal plane is located as target areas enriches the number of target areas while ensuring their accuracy.
  • Before determining the area where the first object focused on by the first image acquisition device is located in the currently processed image, the terminal first determines the first object that the first image acquisition device focuses on.
  • In a possible implementation, the terminal displays a preview image on the shooting interface during capture of the currently processed image, and, in response to a trigger operation on any area in the preview image, determines the object corresponding to that area as the first object.
  • Since the terminal focuses on the object corresponding to a triggered area of the preview image, the object corresponding to the area triggered by the user is the first object that the first image acquisition device focuses on.
  • Before determining the area where the second object on the same focal plane as the first object is located in the currently processed image, the terminal first determines the second object on the same focal plane as the first object.
  • During capture of the currently processed image, the terminal detects the distance between each object in the scene and the image acquisition device.
  • The terminal determines the distance between the first object and the first image acquisition device as a first distance, and determines an object whose distance from the first image acquisition device is close to the first distance as the second object.
  • There may be one or more second objects, which is not limited in this embodiment of the present application; a depth-based selection is sketched below.
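  • A minimal sketch of picking second objects by depth, assuming per-object distances were measured during capture; the tolerance value is an assumption, not from the application.

```python
def same_focal_plane(first_distance_m: float,
                     object_distances: dict,
                     tol_m: float = 0.15):
    """object_distances maps object id -> distance to the camera in meters.
    Objects within tol_m of the first object's distance are treated as lying
    on (approximately) the same focal plane."""
    return [oid for oid, d in object_distances.items()
            if abs(d - first_distance_m) <= tol_m]

# Hypothetical measured distances for objects in the scene.
distances = {"person_a": 2.05, "person_b": 2.10, "lamp": 4.80}
second_objects = same_focal_plane(2.0, distances)   # ['person_a', 'person_b']
```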
  • The terminal can also use the state of the image acquisition device to determine the target area in the currently processed image. That is, when the first image acquisition device is in a moving state during capture of the currently processed image, the terminal determines the moving area in the currently processed image as the target area, where the moving area is obtained by photographing an object in a moving state.
  • Considering that, if the first image acquisition device is in motion while the currently processed image is captured, the user is most likely tracking and shooting a moving object, determining the area of the moving object in the currently processed image as the target area makes the target area most likely the area of interest to the user, which ensures its accuracy.
  • the terminal determines whether the first image acquisition device is in a motion state based on gyroscope (gyro) information of the terminal during the process of capturing the currently processed image.
  • the terminal determines the motion area in the currently processed image based on the preview image in the shooting interface. That is, the terminal determines the object that always exists in the preview image as the object in motion, and determines the area where the object is located in the currently processed image as the motion area.
  • Considering that, when the user shoots a moving object, the terminal moves along with the object, the objects other than the tracked one change in the preview image of the shooting interface: the moving object is always present in the preview image while the objects around it are replaced. Therefore, determining the object that is always present in the preview image as the moving object, and determining the area where that object is located in the currently processed image as the motion area, ensures the accuracy of the determined motion area.
  • the terminal may have obtained multiple images before capturing the currently processed image.
  • In this case, the terminal determines the common object in the multiple images as the object in motion. When a user shoots a video of a moving object, the object appears in multiple captured images as it moves, while the other content of those images differs. Therefore, in a video shooting scenario, determining the common object of the already captured images as the moving object, and the area where it is located in the currently processed image as the motion area, ensures the accuracy of the determined motion area. Two motion checks are sketched below.
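  • An illustrative sketch of the two checks above: gyroscope magnitude decides whether the device is panning, and simple frame differencing stands in for the persistent-object heuristic when estimating the motion area. Both thresholds are assumptions.

```python
import numpy as np

def device_in_motion(gyro_samples, thresh_rad_s: float = 0.2) -> bool:
    """gyro_samples: iterable of (wx, wy, wz) angular rates in rad/s."""
    mags = [np.linalg.norm(s) for s in gyro_samples]
    return float(np.mean(mags)) > thresh_rad_s

def motion_mask(prev_gray: np.ndarray, curr_gray: np.ndarray,
                diff_thresh: int = 25) -> np.ndarray:
    """Binary mask of pixels that changed noticeably between two frames,
    a simple stand-in for locating the object that persists while its
    surroundings are replaced."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return (diff > diff_thresh).astype(np.uint8)
```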
  • the terminal determines a first shooting parameter based on an image quality feature of a first target area among the multiple target areas, and determines a second shooting parameter based on an image quality feature of a second target area among the multiple target areas.
  • the terminal will determine a shooting parameter according to the image quality characteristics of each target area.
  • the shooting parameter is used to make the first image acquisition device focus on the object in the target area.
  • the shooting parameter is also used to make the brightness of the target area where the object is located in the captured image be within a threshold range, which is not limited in this embodiment of the present application.
  • In a possible implementation, the terminal determines the shooting parameters based on the image quality features of the target area as follows: for each target area, the terminal determines the brightness of the target area, and determines at least one of an aperture value or an exposure duration based on that brightness, wherein the brightness of the target area is positively correlated with the aperture value and negatively correlated with the exposure duration.
  • the terminal determines the brightness of the first target area; based on the brightness of the first target area, determines at least one of an aperture value or an exposure duration. Wherein, the brightness of the first target area is positively correlated with the aperture value, and the brightness of the first target area is negatively correlated with the exposure time.
  • the terminal determines the brightness of the second target area; based on the brightness of the second target area, at least one of an aperture value or an exposure duration is determined.
  • the brightness of the second target area is positively correlated with the aperture value
  • the brightness of the second target area is negatively correlated with the exposure time.
  • the terminal stores at least one of the correspondence between brightness and aperture value and the correspondence between brightness and exposure time.
  • Both the aperture value and the exposure duration affect the brightness of the captured image. The aperture value is negatively correlated with the brightness of the image captured with it; that is, the smaller the aperture value, the greater the brightness of the captured image.
  • If the relationship between the brightness of the target area and the aperture value is set as a positive correlation, then when the brightness of the target area is relatively high, the determined aperture value is relatively large, and the brightness of the matching area in the captured image is reduced. In this way, the brightness of the first target area in the first image and of the second target area in the second image can be adjusted into the threshold range, ensuring the image quality of the target areas in the first image and the second image.
  • the exposure time is positively correlated with the brightness of the image captured according to the exposure time, that is, the longer the exposure time is, the greater the brightness of the image captured according to the exposure time is.
  • If the relationship between the brightness of the target area and the exposure duration is set as a negative correlation, then when the brightness of the target area is higher, the determined exposure duration is shorter, and the brightness of the target area in the captured image is reduced. In this way, the brightness of the first target area in the first image and of the second target area in the second image can be adjusted into the threshold range, ensuring the image quality of the target areas; such a mapping is sketched below.
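  • An illustrative sketch of the stated correlations: region brightness positively correlated with the aperture value (f-number) and negatively correlated with the exposure duration. The lookup anchors are assumed values; a real terminal would use calibrated correspondence tables.

```python
import numpy as np

def shooting_params_for_brightness(mean_luma: float):
    """mean_luma: average luminance of the target area, in [0, 255]."""
    # Brighter region -> larger f-number (smaller aperture opening).
    aperture = float(np.interp(mean_luma, [0, 128, 255], [1.8, 4.0, 11.0]))
    # Brighter region -> shorter exposure.
    exposure_s = float(np.interp(mean_luma, [0, 128, 255],
                                 [1 / 30, 1 / 250, 1 / 2000]))
    return aperture, exposure_s

f_number, exposure = shooting_params_for_brightness(mean_luma=190.0)
```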
  • In a possible implementation, the terminal determines the shooting parameters based on the image quality features of the target area as follows: for each target area, the terminal determines the distance between the object in the target area and the first image acquisition device, and determines a focusing parameter based on that distance; the focusing parameter is used to make the first image acquisition device focus on the object in the target area.
  • In a possible implementation, the terminal detects the distance between each object in the scene and the first image acquisition device during capture of the currently processed image, and stores those distances. In this way, after determining multiple target areas in the currently processed image, the terminal can locally acquire, for the object in each target area, its distance from the first image acquisition device, and then determine the focusing parameter based on that distance.
  • For example, the terminal determines the distance between the object in the first target area and the first image acquisition device, and determines a focusing parameter based on that distance; the focusing parameter makes the first image acquisition device focus on the object in the first target area.
  • Similarly, the terminal determines the distance between the object in the second target area and the first image acquisition device, and determines a focusing parameter that makes the first image acquisition device focus on the object in the second target area. A thin-lens conversion from distance to focus setting is sketched below.
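  • An illustrative sketch, not the application's method: converting an object distance into a focus setting via the thin-lens relation 1/f = 1/d_o + 1/d_i, solving for the image distance d_i the focus actuator must realize. The focal length is an assumed value; actual focusing parameters are device-specific actuator codes.

```python
def image_distance_m(object_distance_m: float,
                     focal_length_m: float = 0.026) -> float:
    """Solve 1/d_i = 1/f - 1/d_o for the lens-to-sensor distance."""
    if object_distance_m <= focal_length_m:
        raise ValueError("object lies inside the focal length")
    return 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)

d_i = image_distance_m(2.0)   # lens-to-sensor distance for a subject at 2 m
```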
  • the terminal acquires, from the first image acquisition device, a first image captured based on the first shooting parameter and a second image captured based on the second shooting parameter.
  • the terminal determines a shooting parameter for each target area in the currently processed image, and acquires an image shot based on each shooting parameter from the first image acquisition device.
  • After the terminal determines the first shooting parameters based on the image quality characteristics of the first target area and the second shooting parameters based on the image quality characteristics of the second target area, it sends the first shooting parameters and the second shooting parameters to the first image acquisition device.
  • The first image acquisition device shoots according to the first shooting parameters to obtain the first image, then shoots according to the second shooting parameters to obtain the second image, and uploads the first image and the second image to the terminal.
  • the terminal determines a first target area in the first image and a second target area in the second image.
  • the terminal determining the first target area in the first image includes: the terminal determining a first target area with the same position in the first image based on a position of the first target area in the currently processed image.
  • the terminal determining the second target area in the second image includes: the terminal determining the second target area with the same position in the second image based on the position of the second target area in the currently processed image.
  • Because the camera's shooting frame rate is relatively high, the shooting interval between the currently processed image and the first and second images is very short, and the differences among these images are small. The position of the first target area therefore differs little between the currently processed image and the first image, and the position of the second target area differs little between the currently processed image and the second image. Consequently, directly reusing the position of the first target area from the currently processed image to locate the first target area in the first image, and the position of the second target area to locate it in the second image, improves the efficiency of determining the target areas in the first image and the second image with little effect on their accuracy.
  • In a possible implementation, after the terminal determines the first target area at the same position in the first image and the second target area at the same position in the second image, it corrects the first target area in the first image based on the motion information of the first image acquisition device during the shooting of the first image, and corrects the second target area in the second image based on the motion information of the first image acquisition device during the shooting of the second image.
  • the motion information of the first image acquisition device includes gyro information generated by the terminal during the process of capturing images.
  • Considering that the first image acquisition device may shake while shooting the first image and the second image, the position of the first target area in the currently processed image may differ considerably from its position in the first image, and the position of the second target area may likewise differ between the currently processed image and the second image. Correcting the first target area in the first image based on the motion information of the first image acquisition device during the shooting of the first image, and the second target area in the second image based on the motion information during the shooting of the second image, improves the accuracy of the target areas in both images. A gyro-based correction is sketched below.
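  • An illustrative sketch of correcting a reused target-area position with gyro motion information: small camera rotations approximately translate the image by focal_px * angle. The focal length in pixels, integration step, and sign conventions are assumptions for illustration.

```python
import numpy as np

def corrected_roi(roi, gyro_samples, dt_s: float,
                  focal_px: float = 3000.0,
                  image_w: int = 4000, image_h: int = 3000):
    """roi: (x, y, w, h); gyro_samples: (wx, wy, wz) angular rates in rad/s
    recorded while the frame was shot; dt_s: elapsed time to integrate over."""
    wx = float(np.mean([s[0] for s in gyro_samples]))   # pitch rate
    wy = float(np.mean([s[1] for s in gyro_samples]))   # yaw rate
    dx = -focal_px * wy * dt_s   # sign depends on sensor orientation (assumed)
    dy = -focal_px * wx * dt_s
    x, y, w, h = roi
    x = int(np.clip(x + dx, 0, image_w - w))
    y = int(np.clip(y + dy, 0, image_h - h))
    return (x, y, w, h)
```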
  • the terminal performs differential processing on the first target area and other areas in the first image, and performs differential processing on the second target area in the second image and other areas.
  • The terminal performs differential processing on the first target area and other areas in the first image as follows: the terminal processes the first target area in the first image but does not process the other areas in the first image.
  • the terminal performing differential processing on the second target area in the second image and other areas includes: the terminal processes the second target area in the second image without processing other areas in the second image.
  • The implementation manner in which the terminal processes the first target area in the first image includes: performing noise reduction on the first target area in the first image, adjusting the brightness of the first target area, and the like.
  • The implementation manner in which the terminal processes the second target area in the second image includes: performing noise reduction on the second target area in the second image, adjusting the brightness of the second target area, and the like, which is not limited in this embodiment of the present application.
  • Considering that the first target area in the first image and the second target area in the second image are the areas of interest to the user, processing those target areas while leaving the other areas of the first image and the second image unprocessed not only improves the image quality of the areas the user cares about but also saves processing resources of the terminal, as in the sketch below.
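  • An illustrative sketch of differential processing: denoise only the target area and leave the rest of the frame untouched, saving compute. The denoiser and its strength are illustrative choices, not the application's parameters.

```python
import cv2
import numpy as np

def process_target_area_only(bgr: np.ndarray, roi,
                             h_luma: float = 7, h_color: float = 7):
    """Denoise only the (x, y, w, h) target area; other areas are untouched."""
    x, y, w, h = roi
    patch = bgr[y:y + h, x:x + w]
    denoised = cv2.fastNlMeansDenoisingColored(patch, None,
                                               h_luma, h_color, 7, 21)
    out = bgr.copy()
    out[y:y + h, x:x + w] = denoised
    return out
```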
  • the terminal performing differentiated processing on the first target area and other areas in the first image includes: the terminal performs image processing on the first target area and other areas respectively by using different image processing models.
  • the function of the image processing model is an arbitrary function, for example, deshaking, denoising, deblurring, and the like.
  • the processing algorithm of the model for performing image processing on the first target area is more complicated, and consumes a lot of processing resources of the terminal, but the image processing effect is better.
  • the processing algorithm of the model that performs image processing on other areas is simpler, and consumes less processing resources on the terminal, but the image processing effect is slightly worse.
  • The terminal differentiates the second target area from the other areas in the second image in the same manner as it differentiates the first target area from the other areas in the first image, which will not be repeated here.
  • the terminal processes the first target area in the first image and the second target area in the second image based on different processing parameters. That is, the terminal performs image processing on the first target area in the first image based on the third processing parameter; and performs image processing on the second target area in the second image based on the fourth processing parameter.
  • the third processing parameter and the fourth processing parameter include a noise reduction parameter, a deblurring parameter, a dejittering parameter, and the like.
  • the fourth processing parameter is at least partially different from the third processing parameter.
  • the noise reduction parameter in the fourth processing parameter is different from the noise reduction parameter in the third processing parameter.
  • In a possible implementation, the terminal determines the third processing parameter based on the image quality features of the first target area in the first image, and determines the fourth processing parameter based on the image quality features of the second target area in the second image, which is not limited in this embodiment of the present application.
  • Since the image quality features of the first target area and the second target area differ, processing the two target areas separately with different processing parameters improves the effect of image processing; a usage sketch follows.
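  • An illustrative sketch of third and fourth processing parameters that differ per region: a denoise strength stands in for the stored processing parameters and is applied only within each image's own target area. All values and the synthetic frames are assumptions.

```python
import cv2
import numpy as np

def denoise_region(img: np.ndarray, roi, strength: float) -> np.ndarray:
    """Apply a region-specific denoise strength inside (x, y, w, h) only."""
    x, y, w, h = roi
    out = img.copy()
    out[y:y + h, x:x + w] = cv2.fastNlMeansDenoisingColored(
        img[y:y + h, x:x + w], None, strength, strength, 7, 21)
    return out

# Synthetic stand-ins for the first and second captured images.
first_image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
second_image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
out1 = denoise_region(first_image, (40, 40, 200, 150), strength=10)    # "third" parameter
out2 = denoise_region(second_image, (300, 200, 200, 150), strength=4)  # "fourth" parameter
```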
  • the module for determining the target area in the image and the module for performing differentiated processing are different modules in the terminal.
  • After the first target area in the first image and the second target area in the second image are determined, the first image and the second image are marked distinctively, and the correspondence between the first image and the first target area, and between the second image and the second target area, is stored. This lets the module performing differential processing learn the location of the target area in each image and process each image accordingly.
  • the terminal generates a third image based on the differentially processed first image and the second image.
  • In a possible implementation, the terminal fuses the differentially processed first image and second image to obtain the third image. Since the image quality of the target area of interest to the user is relatively high in each differentially processed image, the image quality of each area of interest to the user is relatively high in the third image obtained by fusing those images; a paste-fusion sketch follows.
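  • An illustrative sketch of generating the third image: start from the first image and paste in the second image's target area, so each region comes from the frame shot with the parameters chosen for it. Plain paste fusion is a stand-in; production pipelines would align frames and blend seams before compositing.

```python
import numpy as np

def fuse_images(first: np.ndarray, second: np.ndarray, second_roi):
    """Compose the third image from the first image plus the second image's
    (x, y, w, h) target area."""
    x, y, w, h = second_roi
    third = first.copy()
    third[y:y + h, x:x + w] = second[y:y + h, x:x + w]
    return third
```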
  • the terminal processes the third image after obtaining the third image, for example, adding special effects to the third image, which is not limited in this embodiment of the present application.
  • steps 304-306 are one of the implementations for the terminal to generate the third image based on the first image and the second image.
  • In another possible implementation, the terminal directly fuses the acquired first image and second image to obtain the third image.
  • the terminal generates a target video based on the currently processed image and the third image, and the sequence of the currently processed image in the target video is before the third image.
  • In a possible implementation, after determining multiple target areas in the currently processed image, the terminal performs image processing on the first target area and the second target area in the currently processed image based on different processing parameters; that is, the terminal performs image processing on the first target area in the currently processed image based on a first processing parameter, and performs image processing on the second target area in the currently processed image based on a second processing parameter.
  • the first processing parameter and the second processing parameter include a noise reduction parameter, a deblurring parameter, a dejittering parameter, and the like.
  • the second processing parameter is at least partially different from the first processing parameter.
  • the noise reduction parameters in the second processing parameters are different from the noise reduction parameters in the first processing parameters.
  • In a possible implementation, the terminal determines the first processing parameter based on the image quality features of the first target area in the currently processed image, and determines the second processing parameter based on the image quality features of the second target area in the currently processed image, which is not limited in this embodiment of the present application.
  • Since the image quality features of different target areas differ, the required image processing parameters also differ; processing the first target area and the second target area separately with different processing parameters therefore improves the image processing effect for the currently processed image.
  • the currently processed image and the third image are two adjacent frames of images in the shooting target video.
  • the target video further includes other images, and the order of the other images in the target video is before the currently processed image.
  • the acquisition manner of the currently processed image is the same as that of the third image, that is, the currently processed image is acquired based on other images before the currently processed image in the target video.
  • the process of acquiring the currently processed image based on the other image in the target video is the same as the process of acquiring the third image based on the currently processed image.
  • after the terminal obtains the third image, it will also obtain the next frame image in the target video based on the third image, in the same way as the third image is obtained based on the currently processed image.
  • when shooting a video, the shooting parameters of the next frame of image are determined based on the previously captured frame of image, so as to ensure the image quality of the multiple target areas of interest to the user in each frame of image.
  • Fig. 4 is a schematic diagram of an image processing process.
  • the terminal turns on the camera, acquires an image, selects a target type, and determines multiple pixel points in the acquired image whose distribution characteristics conform to the object characteristics corresponding to the target type, so as to determine multiple target areas in the image.
  • the shooting parameters and processing parameters are updated based on the image quality characteristics of each target area, and the next frame of image is shot based on the updated shooting parameters, and so on to obtain multiple images.
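  • a compressed sketch of this per-frame feedback loop is shown below; camera.capture is a hypothetical device interface, and the exact mapping curves are assumptions of the sketch, since the description only fixes that a brighter target area maps to a larger aperture value and a shorter exposure duration.

    import numpy as np

    def shooting_params_from_brightness(mean_brightness):
        # mean_brightness in 0..255; returns (aperture_value, exposure_ms).
        b = np.clip(mean_brightness / 255.0, 0.0, 1.0)
        aperture_value = 1.8 + 14.2 * b        # brighter area -> larger f-number
        exposure_ms = 1.0 + 33.0 * (1.0 - b)   # brighter area -> shorter exposure
        return aperture_value, exposure_ms

    def capture_pass(camera, prev_frame, regions):
        # One iteration: derive one shooting-parameter set per target area of
        # the previous frame, then capture one new frame per parameter set.
        frames = []
        for top, left, rh, rw in regions:
            area = prev_frame[top:top + rh, left:left + rw]
            aperture, exposure = shooting_params_from_brightness(area.mean())
            frames.append(camera.capture(aperture=aperture, exposure_ms=exposure))
        return frames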
  • the target area can be corrected in combination with the motion information of the camera.
  • FIG. 5 is a schematic diagram of multiple target areas in an image. Referring to FIG. 5 , there are 4 target areas in the figure, and each target area is rectangular in shape.
  • Fig. 6 is a schematic diagram of the image processing process, which involves performing differentiated processing on multiple images. FIG. 6 illustrates this process by taking a target application in the terminal as the execution subject.
  • referring to FIG. 6, after acquiring multiple images (including the first image and the second image), the first module in the target application determines the target area and the other areas in each image, based on the motion information of the camera as well as the target type and the information about the multiple pixels matching the object features of the target type, both sent by the second module in the target application.
  • the first module segments each image to obtain the image data of the target area and the image data of the other areas in each image, and sends the image data of the target area and the image data of the other areas to the second module.
  • after the second module performs differentiated processing on the image data of the target area and the image data of the other areas in each image, it splices the image data of the target area and the image data of the other areas back together within each image, and then fuses the multiple images.
  • the target application is any application in the terminal.
  • the first module is a module for determining the target area
  • the second module is a module for performing differentiation processing and image fusion.
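  • a minimal sketch of this split-and-splice interplay between the two modules, assuming the first module expresses each target area as an HxW boolean mask (the names split_regions and splice_regions are illustrative):

    import numpy as np

    def split_regions(image, target_mask):
        # First module: separate a frame into target-area data and other-area data.
        target_data = np.where(target_mask[..., None], image, 0)
        other_data = np.where(target_mask[..., None], 0, image)
        return target_data, other_data

    def splice_regions(processed_target, processed_other, target_mask):
        # Second module: stitch the separately processed parts back into one
        # frame before the multi-frame fusion step.
        return np.where(target_mask[..., None], processed_target, processed_other)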
  • the method provided in the embodiment of the present application can be applied in the multi-frame mode of the image acquisition device, such as DOL WDR (Digital Overlap Wide Dynamic Range, digital overlap wide dynamic range) mode, or HDR (High-Dynamic Range, high dynamic range) mode.
  • when shooting an image, the solution provided by the embodiments of the present application does not focus on only one area in the image; it considers multiple target areas in the first image that has already been captured and determines shooting parameters separately for the image quality characteristics of the first target area and the second target area among the multiple target areas. Since the first shooting parameter is determined based on the image quality characteristics of the first target area and the second shooting parameter is determined based on the image quality characteristics of the second target area, the image quality of the first target area is high in the first image captured based on the first shooting parameters, and the image quality of the second target area is high in the second image captured based on the second shooting parameters. As a result, in the third image generated from the first image and the second image, the image quality of both the first target area and the second target area is relatively high, which ensures the image quality of multiple areas in the captured third image and improves the quality of the captured image.
  • FIG. 7 shows a structural block diagram of an image processing apparatus provided by an exemplary embodiment of the present application.
  • the image processing device is applied to a terminal, and the image processing device includes:
  • a target area determination module 701, configured to determine multiple target areas in the currently processed image;
  • a shooting parameter determination module 702, configured to determine a first shooting parameter based on the image quality characteristics of the first target area among the multiple target areas, and determine a second shooting parameter based on the image quality characteristics of the second target area among the multiple target areas; wherein the second shooting parameter is at least partially different from the first shooting parameter;
  • an image acquisition module 703, configured to acquire, from the first image acquisition device, a first image captured based on the first shooting parameters, wherein the first image is later than the currently processed image in frame timing; and acquire, from the first image acquisition device, a second image captured based on the second shooting parameters, wherein the second image is later than the first image in frame timing;
  • An image generation module 704 configured to generate a third image based on the first image and the second image.
  • the device further includes:
  • the first image processing module is configured to perform image processing on the first target area in the currently processed image based on the first processing parameters; perform image processing on the second target area in the currently processed image based on the second processing parameters; wherein, the second The processing parameter is at least partially different from the first processing parameter.
  • the device further includes:
  • the second image processing module is configured to perform image processing on the first target area in the first image based on the third processing parameter; perform image processing on the second target area in the second image based on the fourth processing parameter; wherein, the fourth The processing parameter is at least partially different from the third processing parameter.
  • the target area determination module 701 includes:
  • the first area determination unit is configured to identify the area where the target object in the currently processed image is located, and the target object is an object belonging to the target type;
  • the second area determination unit is configured to determine the identified area as the target area, or determine a circumscribed area of the identified area as the target area.
  • the first area determining unit includes:
  • the feature determining subunit is used to determine the object feature corresponding to the target type, and the object feature refers to the feature of the target object belonging to the target type;
  • the region determination subunit is configured to determine the region in the currently processed image that matches the feature of the object as the region where the target object is located.
  • the area determination subunit is configured to determine multiple pixels in the currently processed image based on the object features, the distribution characteristics of the multiple pixels matching the object features, and to determine the area formed by the multiple pixels as the area where the target object is located.
  • the target area determination module 701 is configured to perform eye tracking through the second image acquisition device during the process of capturing the currently processed image and determine the key point corresponding to the gaze point of the eyeball in the currently processed image, wherein the shooting ranges of the first image acquisition device and the second image acquisition device are different; and to determine the area to which the key point belongs as the target area.
  • the target area determination module 701 is configured to determine the area where the first object focused on by the first image acquisition device is located in the currently processed image; determine the area where the second object located on the same focal plane as the first object is located in the currently processed image; and determine the area where the first object is located and the area where the second object is located as target areas.
  • the target area determining module 701 is configured to determine the moving area in the currently processed image as the target area when the first image acquisition device is in a moving state during the process of capturing the currently processed image , the motion area is obtained by photographing an object in motion.
  • the device further includes:
  • the target type determination module is used to obtain the input target type; or, perform type identification on the currently processed image to obtain the target type.
  • the shooting parameter determination module 702 is configured to determine the brightness of the first target area and, based on the brightness of the first target area, determine at least one of the aperture value or the exposure duration; wherein the brightness of the first target area is positively correlated with the aperture value, and the brightness of the first target area is negatively correlated with the exposure duration.
  • the image generating module 704 is configured to fuse the first image and the second image to obtain a third image.
  • the device further includes an area correction module, and the area correction module is configured to perform at least one of the following: determining, based on the position of the first target area in the currently processed image, the first target area at the same position in the first image, and correcting the first target area in the first image based on the motion information of the first image acquisition device during capture of the first image; determining, based on the position of the second target area in the currently processed image, the second target area at the same position in the second image, and correcting the second target area in the second image based on the motion information of the first image acquisition device during capture of the second image (a minimal sketch of such a correction follows this module list).
  • the device further includes a differentiated processing module, and the differentiated processing module is configured to perform at least one of the following: performing differentiated processing on the first target area and the other areas in the first image; performing differentiated processing on the second target area and the other areas in the second image.
  • the differentiated processing module is configured to process the first target area in the first image without processing the other areas in the first image, and to process the second target area in the second image without processing the other areas in the second image.
  • the device further includes:
  • the video generating module is configured to generate a target video based on the currently processed image and the third image, and the sequence of the currently processed image in the target video is before the third image.
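  • as referenced in the area correction module above, a minimal sketch of such a motion-based correction follows; it assumes the camera motion has already been converted into pixel offsets (deriving those offsets from raw gyro angles is device-specific and omitted here), and correct_region is an illustrative name.

    def correct_region(region, dx, dy, frame_w, frame_h):
        # Shift a (top, left, height, width) target rectangle by the pixel
        # offsets (dx, dy) attributed to camera motion, clamped to the frame.
        top, left, rh, rw = region
        top = min(max(top + dy, 0), frame_h - rh)
        left = min(max(left + dx, 0), frame_w - rw)
        return (top, left, rh, rw)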
  • when shooting an image, the solution provided by the embodiments of the present application does not focus on only one area in the image; it considers multiple target areas in the first image that has already been captured and determines shooting parameters separately for the image quality characteristics of the first target area and the second target area among the multiple target areas. Since the first shooting parameter is determined based on the image quality characteristics of the first target area and the second shooting parameter is determined based on the image quality characteristics of the second target area, the image quality of the first target area is high in the first image captured based on the first shooting parameters, and the image quality of the second target area is high in the second image captured based on the second shooting parameters. As a result, in the third image generated from the first image and the second image, the image quality of both the first target area and the second target area is relatively high, which ensures the image quality of multiple areas in the captured third image and improves the quality of the captured image.
  • the division of the above functional modules is used only as an example for illustration; in practical applications, the above functions can be allocated to different functional modules as required, that is, the internal structure of the terminal is divided into different functional modules to complete all or part of the functions described above.
  • the device and the method embodiment provided by the above embodiment belong to the same idea, and the specific implementation process thereof is detailed in the method embodiment, and will not be repeated here.
  • An embodiment of the present application provides a terminal, and the terminal includes a processor and a memory; the memory stores at least one instruction, and the at least one instruction is used to be executed by the processor to implement the image processing method provided by the above method embodiments.
  • the terminal 800 is a terminal capable of accessing a wireless local area network as a wireless station, such as a smart phone, a tablet computer, and a wearable device.
  • the terminal 800 in this application includes at least one or more of the following components: a processor 810 , a memory 820 and at least two wireless links 830 .
  • processor 810 includes one or more processing cores.
  • the processor 810 uses various interfaces and lines to connect the various parts of the entire terminal 800, and executes the various functions of the terminal 800 and processes data by running or executing the program code stored in the memory 820 and calling the data stored in the memory 820.
  • the processor 810 is implemented in at least one hardware form of Digital Signal Processing (Digital Signal Processing, DSP), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA), or Programmable Logic Array (Programmable Logic Array, PLA).
  • the processor 810 can integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a neural network processing unit (Neural-network Processing Unit, NPU), a modem, and the like.
  • the CPU mainly handles the operating system, user interface and application programs, etc.
  • the GPU is used to render and draw the content that needs to be displayed on the display screen
  • the NPU is used to realize artificial intelligence (Artificial Intelligence, AI) functions
  • the modem is used to handle wireless communication. It can be understood that the above modem may also not be integrated into the processor 810, and may instead be implemented by a separate chip.
  • the processor 810 is used to control the working conditions of at least two wireless links 830.
  • the processor 810 is a processor integrated with a wireless fidelity (Wireless Fidelity, Wi-Fi) chip.
  • Wi-Fi chip is a chip with dual Wi-Fi processing capabilities.
  • the Wi-Fi chip is a Dual Band Dual Concurrent (DBDC) chip, or a Dual Band Simultaneous (DBS) chip.
  • the memory 820 includes a random access memory (Random Access Memory, RAM), and in some embodiments, the memory 820 includes a read-only memory (Read-Only Memory, ROM). In some embodiments, the memory 820 includes a non-transitory computer-readable storage medium. Memory 820 may be used to store program codes.
  • the memory 820 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playback function, an image playback function), instructions for implementing the following method embodiments, and the like; the data storage area may store data created according to the use of the terminal 800 (such as audio data and a phonebook).
  • the memory 820 stores the beacon-frame reception schemes of the different wireless links 830, as well as the identifiers of the access nodes connected to the different wireless links 830, the identifiers of the wireless links 830, and the like.
  • the at least two wireless links 830 are used to connect to different access nodes (Access Point, AP) and to receive downlink data sent by the APs.
  • the different access nodes are access nodes in the same router or access nodes in different routers.
  • the terminal 800 further includes a display screen.
  • a display is a display component for displaying a user interface.
  • the display screen is a display screen with a touch function. Through the touch function, the user can use any suitable object such as a finger or a touch pen to perform touch operations on the display screen.
  • the display screen is usually set on the front panel of the terminal 800 .
  • the display screen is designed as a full screen, a curved screen, a special-shaped screen, a double-sided screen or a folding screen.
  • the display screen is also designed as a combination of a full screen and a curved screen, a combination of a special-shaped screen and a curved screen, etc., which are not limited in this embodiment.
  • the structure of the terminal 800 shown in the above drawings does not constitute a limitation on the terminal 800; the terminal 800 may include more or fewer components than shown in the figure, combine some components, or adopt a different component arrangement.
  • the terminal 800 also includes components such as a microphone, a loudspeaker, an input unit, a sensor, an audio circuit, a module, a power supply, and a Bluetooth module, which will not be repeated here.
  • the embodiment of the present application also provides a server, the server includes a processor and a memory; the memory stores at least one instruction, and the at least one instruction is used to be executed by the processor to implement the image processing method provided by the above method embodiments .
  • FIG. 9 shows a structural block diagram of a server provided by an exemplary embodiment of the present application.
  • the server 900 may vary greatly due to different configurations or performance, and may include one or more processors (Central Processing Units, CPU) 901 and one or more memories 902, wherein at least one computer program is stored in the memory 902, and the at least one computer program is loaded and executed by the processor 901 to implement the methods provided by the above method embodiments.
  • the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the server may also include other components for realizing device functions, which will not be repeated here.
  • the present application also provides a computer-readable medium, the computer-readable medium stores at least one instruction, and the at least one instruction is loaded and executed by a processor to implement the image processing method shown in the above embodiments.
  • the present application also provides a computer program product, the computer program product stores at least one instruction, and the at least one instruction is loaded and executed by a processor to implement the image processing method shown in the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method and apparatus, a terminal, and a storage medium, belonging to the field of computer technology. The method comprises: determining multiple target areas in a currently processed image (201); determining a first shooting parameter based on image quality characteristics of a first target area among the multiple target areas, and determining a second shooting parameter based on image quality characteristics of a second target area among the multiple target areas (202), wherein the second shooting parameter is at least partially different from the first shooting parameter; acquiring, from a first image acquisition device, a first image captured based on the first shooting parameter and a second image captured based on the second shooting parameter (203), wherein the first image is later than the currently processed image in frame timing, and the second image is later than the first image in frame timing; and generating a third image based on the first image and the second image (204). The method, apparatus, terminal, and storage medium can improve the quality of the captured third image.


Claims (20)

  1. An image processing method, executed by a terminal, the method comprising:
    determining multiple target areas in a currently processed image;
    determining a first shooting parameter based on image quality characteristics of a first target area among the multiple target areas;
    determining a second shooting parameter based on image quality characteristics of a second target area among the multiple target areas; wherein the second shooting parameter is at least partially different from the first shooting parameter;
    acquiring, from a first image acquisition device, a first image captured based on the first shooting parameter; wherein the first image is later than the currently processed image in frame timing;
    acquiring, from the first image acquisition device, a second image captured based on the second shooting parameter; wherein the second image is later than the first image in frame timing;
    generating a third image based on the first image and the second image.
  2. The method according to claim 1, wherein after the determining multiple target areas in a currently processed image, the method further comprises:
    performing image processing on the first target area in the currently processed image based on a first processing parameter;
    performing image processing on the second target area in the currently processed image based on a second processing parameter;
    wherein the second processing parameter is at least partially different from the first processing parameter.
  3. The method according to claim 1, wherein before the generating a third image based on the first image and the second image, the method further comprises:
    performing image processing on the first target area in the first image based on a third processing parameter;
    performing image processing on the second target area in the second image based on a fourth processing parameter;
    wherein the fourth processing parameter is at least partially different from the third processing parameter.
  4. The method according to claim 1, wherein the determining multiple target areas in a currently processed image comprises:
    identifying an area where a target object in the currently processed image is located, the target object being an object belonging to a target type;
    determining the identified area as the target area, or determining a circumscribed area of the identified area as the target area.
  5. The method according to claim 4, wherein the identifying an area where a target object in the currently processed image is located comprises:
    determining an object feature corresponding to the target type, the object feature referring to a feature of a target object belonging to the target type;
    determining an area in the currently processed image that matches the object feature as the area where the target object is located.
  6. The method according to claim 5, wherein the determining an area in the currently processed image that matches the object feature as the area where the target object is located comprises:
    determining multiple pixels in the currently processed image based on the object feature, a distribution feature of the multiple pixels matching the object feature;
    determining an area formed by the multiple pixels as the area where the target object is located.
  7. The method according to claim 1, wherein the determining multiple target areas in a currently processed image comprises:
    during capture of the currently processed image, performing eye tracking through a second image acquisition device, and determining a key point in the currently processed image corresponding to a gaze point of an eyeball, shooting ranges of the first image acquisition device and the second image acquisition device being different;
    determining an area to which the key point belongs as the target area.
  8. The method according to claim 1, wherein the determining multiple target areas in a currently processed image comprises:
    determining an area in the currently processed image where a first object focused on by the first image acquisition device is located;
    determining an area in the currently processed image where a second object located on the same focal plane as the first object is located;
    determining the area where the first object is located and the area where the second object is located as the target areas.
  9. The method according to claim 1, wherein the determining multiple target areas in a currently processed image comprises:
    in a case where the first image acquisition device is in a moving state during capture of the currently processed image, determining a motion area in the currently processed image as the target area, the motion area being obtained by photographing an object in a moving state.
  10. The method according to claim 4, wherein before the identifying an area where a target object in the currently processed image is located, the method further comprises:
    acquiring the target type as input; or,
    performing type identification on the currently processed image to obtain the target type.
  11. The method according to any one of claims 1-10, wherein the determining a first shooting parameter based on image quality characteristics of a first target area among the multiple target areas comprises:
    determining a brightness of the first target area;
    determining at least one of an aperture value or an exposure duration based on the brightness of the first target area;
    wherein the brightness of the first target area is positively correlated with the aperture value, and the brightness of the first target area is negatively correlated with the exposure duration.
  12. The method according to any one of claims 1-10, wherein the generating a third image based on the first image and the second image comprises:
    fusing the first image with the second image to obtain the third image.
  13. The method according to any one of claims 1-10, wherein before the generating a third image based on the first image and the second image, the method further comprises at least one of the following:
    determining, based on a position of the first target area in the currently processed image, a first target area at the same position in the first image; and correcting the first target area in the first image based on motion information of the first image acquisition device during capture of the first image;
    determining, based on a position of the second target area in the currently processed image, a second target area at the same position in the second image; and correcting the second target area in the second image based on motion information of the first image acquisition device during capture of the second image.
  14. The method according to any one of claims 1-10, wherein before the generating a third image based on the first image and the second image, the method further comprises at least one of the following:
    performing differentiated processing on the first target area and other areas in the first image;
    performing differentiated processing on the second target area and other areas in the second image.
  15. The method according to claim 14, wherein the performing differentiated processing on the first target area and other areas in the first image comprises:
    processing the first target area in the first image without processing the other areas in the first image;
    and the performing differentiated processing on the second target area and other areas in the second image comprises:
    processing the second target area in the second image without processing the other areas in the second image.
  16. The method according to any one of claims 1-10, wherein after the generating a third image based on the first image and the second image, the method further comprises:
    generating a target video based on the currently processed image and the third image, the currently processed image preceding the third image in order in the target video.
  17. An image processing apparatus, the apparatus comprising:
    a target area determination module, configured to determine multiple target areas in a currently processed image;
    a shooting parameter determination module, configured to determine a first shooting parameter based on image quality characteristics of a first target area among the multiple target areas, and determine a second shooting parameter based on image quality characteristics of a second target area among the multiple target areas; wherein the second shooting parameter is at least partially different from the first shooting parameter;
    an image acquisition module, configured to acquire, from a first image acquisition device, a first image captured based on the first shooting parameter, wherein the first image is later than the currently processed image in frame timing; and acquire, from the first image acquisition device, a second image captured based on the second shooting parameter, wherein the second image is later than the first image in frame timing;
    an image generation module, configured to generate a third image based on the first image and the second image.
  18. The apparatus according to claim 17, wherein the apparatus further comprises:
    a first image processing module, configured to perform image processing on the first target area in the currently processed image based on a first processing parameter, and perform image processing on the second target area in the currently processed image based on a second processing parameter; wherein the second processing parameter is at least partially different from the first processing parameter.
  19. A terminal, the terminal comprising a processor and a memory; the memory storing at least one instruction, the at least one instruction being configured to be executed by the processor to implement the image processing method according to any one of claims 1 to 16.
  20. A computer-readable storage medium, the storage medium storing at least one instruction, the at least one instruction being configured to be executed by a processor to implement the image processing method according to any one of claims 1 to 16.
PCT/CN2022/097953 2021-07-30 2022-06-09 Image processing method and apparatus, terminal, and storage medium WO2023005450A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110871943.1 2021-07-30
CN202110871943.1A CN115696019A (zh) Image processing method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023005450A1 true WO2023005450A1 (zh) 2023-02-02

Family

ID=85058007

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/097953 WO2023005450A1 (zh) Image processing method and apparatus, terminal, and storage medium

Country Status (2)

Country Link
CN (1) CN115696019A (zh)
WO (1) WO2023005450A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898135A (zh) * 2015-11-15 2016-08-24 乐视移动智能信息技术(北京)有限公司 Camera imaging method and camera apparatus
JP2017049311A (ja) * 2015-08-31 2017-03-09 沖電気工業株式会社 Information processing apparatus, information processing method, and program
CN107426490A (zh) * 2017-05-16 2017-12-01 深圳市金立通信设备有限公司 Photographing method and terminal
CN110225248A (zh) * 2019-05-29 2019-09-10 Oppo广东移动通信有限公司 Image acquisition method and apparatus, electronic device, and computer-readable storage medium
CN110766637A (zh) * 2019-10-30 2020-02-07 北京金山云网络技术有限公司 Video processing method, processing apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN115696019A (zh) 2023-02-03

Similar Documents

Publication Publication Date Title
WO2021051995A1 (zh) Photographing method and terminal
CN106937039B (zh) Dual-camera-based imaging method, mobile terminal, and storage medium
US9065967B2 (en) Method and apparatus for providing device angle image correction
US11158027B2 (en) Image capturing method and apparatus, and terminal
US20230132407A1 (en) Method and device of video virtual background image processing and computer apparatus
JP7152598B2 (ja) Image processing method and apparatus, electronic device, and storage medium
US11895567B2 (en) Lending of local processing capability between connected terminals
CN107948505B (zh) Panoramic shooting method and mobile terminal
CN108200337B (zh) Photographing processing method, apparatus, terminal, and storage medium
US11310443B2 (en) Video processing method, apparatus and storage medium
CN112995467A (zh) Image processing method, mobile terminal, and storage medium
CN112184722B (zh) Image processing method, terminal, and computer storage medium
US20180249072A1 (en) Method and device for image processing
CN107689029A (zh) Image processing method, mobile terminal, and computer-readable storage medium
CN113259583A (zh) Image processing method and apparatus, terminal, and storage medium
CN111866388A (zh) Multiple-exposure shooting method, device, and computer-readable storage medium
US20230247293A1 (en) Multi-lens video recording method and related device
WO2023005450A1 (zh) Image processing method and apparatus, terminal, and storage medium
CN107493431A (zh) Image shooting and synthesis method, terminal, and computer-readable storage medium
US11245840B2 (en) Method and system for dynamically adjusting camera shots
CN114143471A (zh) Image processing method and system, mobile terminal, and computer-readable storage medium
CN112188102A (zh) Photographing method, mobile terminal, and storage medium
CN114390191A (zh) Video recording method, electronic device, and storage medium
WO2023036313A1 (zh) Image capturing method and apparatus, computer device, and storage medium
WO2022228089A1 (zh) Sound pickup method and apparatus, and related electronic device

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22848059

Country of ref document: EP

Kind code of ref document: A1