WO2022227916A1 - Image processing method, image processor, electronic device and storage medium - Google Patents

Image processing method, image processor, electronic device and storage medium

Info

Publication number
WO2022227916A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene image
image
scene
motion vector
relative motion
Prior art date
Application number
PCT/CN2022/081493
Other languages
English (en)
French (fr)
Inventor
朱文波 (Zhu Wenbo)
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2022227916A1

Classifications

    • G06T 5/70
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image

Definitions

  • the present application belongs to the technical field of electronic devices, and in particular, relates to an image processing method, an image processor, an electronic device and a storage medium.
  • Electronic devices such as mobile phones and tablet computers are usually equipped with cameras to provide users with a photographing function, so that users can record, anytime and anywhere, what happens around them and the scenery they see.
  • However, due to the hardware of the electronic device itself, there is usually noise in the images captured by the electronic device, and this noise degrades image quality. Therefore, it is necessary to perform noise reduction processing on images captured by electronic devices.
  • Embodiments of the present application provide an image processing method, an image processor, an electronic device, and a storage medium, which can perform noise reduction processing on an image captured by the electronic device to make the image clearer.
  • The present application discloses an image processing method, comprising:
  • acquiring a current scene image and a historical scene image of a shooting scene of an electronic device;
  • acquiring a relative motion vector of an object area in the shooting scene in the current scene image and the historical scene image;
  • generating, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image; and
  • performing synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  • the application also discloses an image processor, comprising:
  • a data interface unit used for acquiring the current scene image and the historical scene image of the shooting scene of the electronic device
  • a data processing unit for acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image
  • transmitting the current scene image, the historical scene image and the relative motion vector to an application processor, so that the application processor generates, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image, and performs synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  • the application also discloses an electronic device, comprising:
  • a camera used to collect scene images of the shooting scene
  • an image processor for acquiring the current scene image and the historical scene image of the shooting scene collected by the camera; and acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image;
  • an application processor, used to generate an aligned scene image aligned with the current scene image according to the relative motion vector and the historical scene image, and to perform synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  • the present application also discloses an electronic device, including a processor and a memory, the memory stores a computer program, and the processor executes the image processing method provided by the present application by loading the computer program.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a noise-reduced scene image obtained by performing synthetic noise reduction processing on a current scene image and an aligned scene image in an embodiment of the present application.
  • FIG. 3 is an example diagram of generating an aligned scene image according to a noise reduction object area in an embodiment of the present application.
  • FIG. 4 is another schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an image processor provided in an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an application scenario of an electronic device provided by an embodiment of the present application.
  • FIG. 8 is another schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 1, the process of the image processing method may include:
  • a current scene image and a historical scene image of the shooting scene of the electronic device are acquired.
  • the image processing method provided by the present application may be configured in an electronic device equipped with a camera, to perform noise reduction processing on an image captured by the camera.
  • Noise reduction processing can be generally understood as eliminating the noise in the image, thereby improving the quality of the image.
  • Noise, also called image noise or grain, mainly refers to the rough parts of an image produced while the electronic device receives and outputs light as a signal, and also refers to foreign pixels that should not appear in the image, usually caused by electronic interference. A noisy image looks like a clean image that has been smudged and covered with small rough spots.
  • The electronic device may be a mobile electronic device with a camera, such as a smartphone, a tablet computer, a palmtop computer or a notebook computer, or a stationary electronic device with a camera, such as a desktop computer or a TV, which is not specifically limited in this application.
  • This embodiment of the present application does not specifically limit the type and number of cameras configured on the electronic device, which can be configured by those of ordinary skill in the art according to actual needs. For example, taking the shooting frame rate as the classification standard, the camera can be a low-frame-rate camera (shooting frame rate usually within 100 frames per second, such as 30 or 60 frames per second) or a high-frame-rate camera (shooting frame rate usually above 100 frames per second, such as 120 frames per second). As another example, taking the focal length as the classification standard, the camera can be a standard camera (focal length between 40 mm and 60 mm), a wide-angle camera (focal length below 40 mm), or a telephoto camera (focal length above 60 mm).
  • the shooting scene of the electronic device can be understood as the area that the camera configured on the electronic device is aimed at after being enabled, that is, the area where the camera can convert optical signals into corresponding image data.
  • For example, after the electronic device enables the camera according to a user operation, if the user points the camera of the electronic device at an area including an object, the area including the object is the shooting scene of the camera.
  • the current scene image and the historical scene image of the shooting scene of the electronic device are first obtained for subsequent noise reduction processing.
  • The current scene image may be understood as the image obtained by the camera shooting the shooting scene at the current moment, and the historical scene image may be an image of the shooting scene captured by the camera before the current moment.
  • For example, the electronic device is provided with a buffer space (for example, a part of the memory is set aside as the buffer space), and the buffer space is used for buffering the images captured by the camera.
  • Correspondingly, when acquiring the current scene image and the historical scene image of the shooting scene, the current scene image can be acquired directly from the camera, and the buffered historical scene images of the shooting scene can be acquired from the buffer space. In addition, the aforementioned current scene image is also buffered into the buffer space.
  • this embodiment of the present application does not specifically limit the number of acquired historical scene images, for example, one historical scene image or multiple historical scene images can be acquired. A fixed number of historical scene images can be acquired, or the number of historical scene images to be acquired can be dynamically determined.
  • the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image is acquired.
  • After the current scene image and the historical scene image are acquired, object recognition can be performed on them to determine the objects in the shooting scene and, accordingly, the object area of each object. For example, for an object in the shooting scene, the smallest circumscribed rectangular area of the object is determined as the object area of that object.
  • the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image may be acquired.
  • the relative motion vector of an object area is obtained by analysis and calculation according to the position and/or size difference of the object area in the current scene image and the historical scene image. It is used to describe the moving direction and moving speed of the object in the object area with the current scene image and the historical scene image as reference.
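  • As a minimal sketch of this step, the displacement of an object area between two frames can be estimated from its bounding-box positions. The sketch below assumes the object area has already been located in both images; the patent derives the vector from position and/or size differences, and this example uses the position difference only.

```python
import numpy as np

def relative_motion_vector(bbox_hist, bbox_curr, dt_frames=1.0):
    """Estimate an object area's relative motion between two frames.

    bbox_* = (x, y, w, h) of the object area in the historical and the
    current scene image. The direction of the returned (dx, dy) gives the
    moving direction; its magnitude divided by dt_frames gives the moving
    speed in pixels per frame.
    """
    cx_h, cy_h = bbox_hist[0] + bbox_hist[2] / 2, bbox_hist[1] + bbox_hist[3] / 2
    cx_c, cy_c = bbox_curr[0] + bbox_curr[2] / 2, bbox_curr[1] + bbox_curr[3] / 2
    return np.array([cx_c - cx_h, cy_c - cy_h]) / dt_frames
```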
  • an aligned scene image aligned with the current scene image is generated based on the relative motion vector and the historical scene image.
  • For an object area, the image content of the object area in the current scene image is known, the image content of the object area in the historical scene image is known, and the relative motion vector of the object area in the current scene image and the historical scene image is also known. Therefore, according to the obtained relative motion vector and the historical scene image, an image aligned with the current scene image can be generated, which is recorded as the aligned scene image.
  • the number of generated aligned scene images is the same as the number of acquired historical scene images. For example, it is assumed that five historical scene images of the current scene image at different times are acquired, and five aligned scene images are correspondingly generated according to the five historical scene images and their corresponding relative motion vectors.
  • a combined noise reduction process is performed on the current scene image and the aligned scene image to obtain a noise reduction scene image of the current scene image.
  • As above, after the aligned scene image aligned with the current scene image is generated, the embodiment of the present application further performs synthetic noise reduction processing on the current scene image and the generated aligned scene image according to the configured multi-frame noise reduction strategy, so as to obtain the noise-reduced scene image of the current scene image.
  • Exemplarily, the multi-frame noise reduction strategy can be configured as:
  • for the pixel at each pixel position, directly calculating the average of the first pixel value of that pixel in the current scene image and the second pixel value of that pixel in the historical scene image, and generating the noise-reduced scene image from the average pixel values of all pixel positions.
  • Alternatively, the strategy can be configured as: for the pixel at each pixel position, calculating the average of the first pixel value of that pixel in the current scene image and the second pixel value of that pixel in the historical scene image; then, according to the difference between each of the first and second pixel values and the average, assigning weights to the first and second pixel values and computing their weighted average; and generating the noise-reduced scene image from the weighted averages of all pixel positions.
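  • A minimal sketch of this kind of multi-frame merge is given below, assuming the frames are already aligned, equally sized NumPy arrays; the inverse-deviation weighting is one illustrative choice of the weight assignment the patent leaves open (with exactly two frames it degenerates to a plain average, so the sketch accepts any number of frames).

```python
import numpy as np

def fuse_frames(frames, eps=1e-6):
    """Multi-frame noise reduction: weighted per-pixel average of aligned frames."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    mean = stack.mean(axis=0)
    # Weight each frame's pixel inversely to its deviation from the per-pixel
    # mean, so outlier values (likely noise) contribute less.
    weights = 1.0 / (np.abs(stack - mean) + eps)
    fused = (weights * stack).sum(axis=0) / weights.sum(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)
```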
  • As can be seen from the above, the embodiment of the present application acquires the current scene image and the historical scene image of the shooting scene of the electronic device, acquires the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image, further generates, according to the obtained relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image, and finally performs synthetic noise reduction processing on the current scene image and the aligned scene image to obtain the noise-reduced scene image of the current scene image.
  • Compared with traditional noise reduction methods, the embodiment of the present application does not need to analyze the noise patterns in the image to perform noise reduction; instead, it directly uses previously generated historical images to perform synthetic noise reduction on the current image. This not only achieves noise reduction, but also compensates for the image loss caused by traditional noise reduction methods, making the noise-reduced image clearer.
  • Optionally, in an embodiment, acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image includes: determining, from the object areas of the shooting scene, a noise reduction object area that needs noise reduction processing; and acquiring the region relative motion vector of the noise reduction object area in the current scene image and the historical scene image.
  • the noise reduction processing of the current scene image is implemented only based on some object regions in the shooting scene.
  • When acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image, the object area that needs noise reduction processing is first determined from the object areas of the shooting scene according to the configured object determination strategy and recorded as the noise reduction object area; then only the relative motion vector of the determined noise reduction object area in the current scene image and the historical scene image is acquired, recorded as the region relative motion vector.
  • the configuration of the object determination strategy is not specifically limited in the embodiments of the present application, and can be configured by those skilled in the art according to actual needs.
  • For example, the object determination strategy is configured as: determining, among the multiple object areas, the object area that the user is interested in as the target object area to be subjected to noise reduction processing. There is no specific limitation on how to identify the object area that the user is interested in, and a person skilled in the art can configure an appropriate identification manner according to actual needs.
  • Optionally, in an embodiment, determining the noise reduction object area that needs noise reduction processing from the object areas of the shooting scene includes: acquiring the user's degree of interest in each object area in the shooting scene; and determining the noise reduction object area from the object areas of the shooting scene according to the degree of interest in each object area.
  • In the embodiment of the present application, the user's degree of interest in each object area in the shooting scene of the electronic device is evaluated according to the configured interest degree evaluation strategy, so as to obtain the user's degree of interest in each object area in the shooting scene of the electronic device.
  • the configuration of the interest degree evaluation policy is not specifically limited here, and can be configured by those skilled in the art according to actual needs.
  • Exemplarily, the object area with the highest degree of interest among the object areas of the shooting scene is determined as the noise reduction object area, or an object area whose degree of interest reaches a degree threshold is determined as the noise reduction object area.
  • the value of the degree threshold is not specifically limited in this embodiment of the present application, and can be selected by those skilled in the art according to actual needs.
  • Optionally, in an embodiment, acquiring the user's degree of interest in each object area in the shooting scene includes: acquiring the focus distance corresponding to the current scene image and the depth distance of each object area; and acquiring the user's degree of interest in each object area according to the difference between the depth distance of each object area and the focus distance.
  • This embodiment of the present application provides an optional interest degree evaluation strategy.
  • It should be noted that when a user operates the electronic device to shoot, the user usually operates the electronic device to focus on the object of interest in the shooting scene. Therefore, the degree of interest can be evaluated according to the focusing situation.
  • First, the focus distance corresponding to the current scene image is acquired, and the depth distance of each object area in the shooting scene is acquired. There is no specific limitation here on how to acquire the depth distance of each object area; a person skilled in the art can configure the acquisition method of the depth distance according to actual needs.
  • Exemplarily, when the electronic device is provided with two cameras arranged side by side, the current scene image can be acquired from one of the cameras, and another scene image captured synchronously by the other camera can be acquired from the other camera. Since the two cameras are placed side by side, there will be a parallax between the current scene image and the other scene image.
  • a triangular parallax algorithm can be used to calculate the depth distance of each object area according to the current scene image and the aforementioned another scene image.
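  • A minimal sketch of the triangulation step is given below, assuming the per-area disparity (in pixels) between the two synchronized images has already been measured; focal_length_px and baseline_m stand for the stereo rig's calibration parameters, which the patent does not spell out.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangular parallax: depth = focal_length * baseline / disparity."""
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    # Guard against a zero disparity (an object at effectively infinite depth).
    return focal_length_px * baseline_m / np.maximum(disparity_px, 1e-6)
```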
  • As above, after the focus distance corresponding to the current scene image and the depth distance of each object area in the shooting scene are obtained, the user's degree of interest in each object area is further obtained according to the difference between the depth distance of each object area and the focus distance.
  • Exemplarily, the degree of interest is configured to be negatively correlated with the difference between the depth distance of an object area and the focus distance; that is, for an object area, the greater the difference between its depth distance and the focus distance, the less interested the user is in it.
  • For example, three interest levels can be defined, namely low, medium and high, and three intervals of the difference between the depth distance and the focus distance can be divided accordingly, namely difference interval A, difference interval B and difference interval C; difference interval A is associated with the interest level "low", difference interval B with "medium", and difference interval C with "high".
  • When the data processing unit 120 evaluates the degree of interest, for an object area: if the difference between its depth distance and the focus distance falls in difference interval A, its interest level is determined to be "low"; if the difference falls in difference interval B, the interest level is determined to be "medium"; and if the difference falls in difference interval C, the interest level is determined to be "high".
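  • A minimal sketch of this mapping is shown below; the interval bounds are illustrative placeholders, since the patent leaves the concrete intervals A, B and C to the implementer.

```python
def interest_level(depth_m: float, focus_m: float) -> str:
    """Map the |depth - focus| difference to an interest level."""
    diff = abs(depth_m - focus_m)
    if diff < 0.3:    # difference interval C: close to the focus plane
        return "high"
    if diff < 1.0:    # difference interval B
        return "medium"
    return "low"      # difference interval A: far from the focus plane
```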
  • acquiring the user's interest degree in each object area in the shooting scene includes:
  • the interest degree of each object area is evaluated by the interest metric model, and the interest degree of each object area is obtained.
  • It should be noted that in the embodiment of the present application an interest metric model is pre-trained; the interest metric model is configured to evaluate the degree of interest of an image and correspondingly output a numerical value representing that degree of interest. The architecture and training method of the interest metric model are not specifically limited here, and can be configured by those skilled in the art according to actual needs.
  • For example, a convolutional neural network can be selected as the basic architecture of the interest metric model and trained in a supervised manner using image samples labeled with interest degrees, so as to obtain an interest metric model for evaluating the degree of interest.
  • Correspondingly, for an object area, the image content of the object area in the current scene image is input into the interest metric model to obtain the degree of interest of that object area; in this way, the degree of interest of each object area is obtained through the interest metric model.
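  • A minimal sketch of such a model is given below, assuming PyTorch and 64×64 RGB crops of each object area; the architecture and the 0-to-1 interest score are illustrative, since the patent leaves both open.

```python
import torch
import torch.nn as nn

class InterestNet(nn.Module):
    """Toy CNN that scores an object-area crop with an interest value in [0, 1]."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Supervised training would minimize, e.g., nn.MSELoss() between the predicted
# score and human-annotated interest labels, as the patent suggests.
```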
  • Optionally, in an embodiment, generating the aligned scene image aligned with the current scene image according to the relative motion vector and the historical scene image includes: generating, according to the region relative motion vector of the noise reduction object area and the historical scene image, aligned image content that is aligned with the first current image content of the noise reduction object area in the current scene image; and generating the aligned scene image according to that aligned image content and the second current image content of the non-noise reduction object area in the current scene image.
  • The embodiment of the present application further provides a way of generating the aligned scene image based on the noise reduction object area.
  • Specifically, the aligned image content that is aligned with the first current image content of the noise reduction object area in the current scene image is generated, while the second current image content of the non-noise reduction object area in the current scene image is directly used as the image content aligned with it; the aligned scene image aligned with the current scene image is then generated according to the aligned image content corresponding to the first current image content and the second current image content of the non-noise reduction object area in the current scene image.
  • For example, referring to FIG. 3, the shooting scene is divided into three object areas, one of which is determined as the noise reduction object area, and the other two are non-noise reduction object area A and non-noise reduction object area B.
  • When generating the aligned scene image, the aligned image content for the noise reduction object area is generated from its image content in the historical scene image and its region relative motion vector; the image content of non-noise reduction object area A in the current scene image is used directly as its aligned image content, and the image content of non-noise reduction object area B in the current scene image is likewise used directly as its aligned image content.
  • Optionally, in an embodiment, generating the aligned image content that is aligned with the first current image content of the noise reduction object area in the current scene image includes: remapping, according to the region relative motion vector of the noise reduction object area, the historical image content of the noise reduction object area in the historical scene image, so as to obtain the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image.
  • Remapping maps an original image to another image through a mathematical formula; in layman's terms, it places an element at a certain position in one image at a specified position in another image.
  • In the embodiment of the present application, when generating the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image according to the region relative motion vector of the noise reduction object area and the historical scene image, the historical image content of the noise reduction object area in the historical scene image is remapped according to the region relative motion vector, and the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image is obtained correspondingly.
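  • A minimal sketch of this remapping is given below, under the simplifying assumption that the region moves by a pure, integer-pixel translation; a real implementation may use a full mapping function (for example, OpenCV's cv2.remap).

```python
import numpy as np

def remap_region(history, bbox, motion_vec):
    """Shift a region's historical content by its region relative motion vector.

    bbox = (x, y, w, h) locates the region in the historical frame, and
    motion_vec = (dx, dy) is its displacement toward the current frame.
    Returns the region content placed at its predicted current-frame
    position on an otherwise empty canvas.
    """
    x, y, w, h = bbox
    dx, dy = motion_vec
    canvas = np.zeros_like(history)
    nx, ny = int(round(x + dx)), int(round(y + dy))
    # Clip so the shifted region stays inside the frame.
    nx0, ny0 = max(nx, 0), max(ny, 0)
    nx1, ny1 = min(nx + w, history.shape[1]), min(ny + h, history.shape[0])
    sx0, sy0 = x + (nx0 - nx), y + (ny0 - ny)
    canvas[ny0:ny1, nx0:nx1] = history[sy0:sy0 + (ny1 - ny0), sx0:sx0 + (nx1 - nx0)]
    return canvas
```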
  • Optionally, in an embodiment, acquiring the region relative motion vector of the noise reduction object area in the current scene image and the historical scene image includes: determining feature points used to characterize the noise reduction object area; acquiring the feature point relative motion vector of the feature points in the current scene image and the historical scene image; and using the feature point relative motion vector as the region relative motion vector of the noise reduction object area.
  • To improve the efficiency of acquiring the region relative motion vector, the embodiment of the present application first determines, according to the configured feature point determination strategy, the feature points used to characterize the noise reduction object area, and uses these feature points to represent the noise reduction object area when acquiring the region relative motion vector. After that, the relative motion vector of the feature points in the current scene image and the historical scene image is acquired, recorded as the feature point relative motion vector, and the feature point relative motion vector is used as the region relative motion vector of the noise reduction object area.
  • the configuration of the feature point determination strategy is not specifically limited, and can be specifically configured by those skilled in the art according to actual needs.
  • the feature point determination strategy is configured to determine feature points based on characteristics of different objects.
  • the vertex positions of some objects can be selected as feature points, such as the hand, head of a character, etc.; the edges of some objects can be selected as feature points, such as the edge of a car.
  • It should be noted that a feature point in the embodiments of the present application does not refer to a single pixel, but to all pixels in a specific, relatively fixed small area, such as the fingertip of a finger, the ear on a head, or the vertex of a moving object; all pixels corresponding to such an area are used as the feature point.
  • Optionally, in an embodiment, using the feature point relative motion vector as the region relative motion vector includes: acquiring the current motion vector of the electronic device; correcting the feature point relative motion vector according to the current motion vector to obtain a corrected feature point relative motion vector; and using the corrected feature point relative motion vector as the region relative motion vector of the noise reduction object area.
  • Specifically, the current motion vector of the electronic device is acquired from a motion sensor configured in the electronic device (including but not limited to a gyroscope sensor, an acceleration sensor, etc.), and the aforementioned feature point relative motion vector is corrected according to the current motion vector of the electronic device to obtain the corrected feature point relative motion vector.
  • The correction is performed according to the following formula: V'_feature = V_feature - V_device, where V'_feature denotes the corrected feature point relative motion vector, V_feature denotes the feature point relative motion vector, and V_device denotes the current motion vector of the electronic device.
  • After the correction is completed, the corrected feature point relative motion vector is directly used as the region relative motion vector of the noise reduction object area.
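  • The sketch below pulls these steps together, assuming OpenCV's Lucas-Kanade tracker as one possible way to measure the feature point displacement (the patent does not prescribe a tracking algorithm) and assuming device_motion has already been converted from sensor readings into pixel units.

```python
import cv2
import numpy as np

def region_motion_vector(hist_gray, curr_gray, feature_pts, device_motion=(0.0, 0.0)):
    """Track feature points between frames and correct for device motion.

    feature_pts: float32 array of shape (N, 1, 2) holding points inside the
    noise reduction object area in the historical frame; at least some points
    are assumed to track successfully.
    """
    tracked, status, _err = cv2.calcOpticalFlowPyrLK(hist_gray, curr_gray, feature_pts, None)
    good = status.ravel() == 1
    # Feature point relative motion vector: mean displacement of tracked points.
    v_feature = (tracked[good] - feature_pts[good]).reshape(-1, 2).mean(axis=0)
    # V'_feature = V_feature - V_device, the correction formula above.
    return v_feature - np.asarray(device_motion, dtype=np.float64)
```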
  • Optionally, in an embodiment, acquiring the historical scene images of the shooting scene of the electronic device includes: acquiring a state factor used for determining the image acquisition number; calculating, according to the state factor and a quantity calculation strategy corresponding to the state factor, the image acquisition number corresponding to the state factor; and acquiring the historical scene images of the shooting scene according to the image acquisition number.
  • a state factor for determining the number of image acquisitions is predefined.
  • the state factor includes at least one of the scene type of the shooting scene, the operating power consumption and operating temperature of the electronic device, and the noise reduction quality of the historical noise reduction scene image of the shooting scene.
  • The historical noise-reduced scene images include images obtained by performing noise reduction on historical scene images according to the image processing method provided by the embodiment of the present application. For example, after noise reduction processing is performed on the current scene image at the current moment, a noise-reduced scene image of the current scene image is obtained; at the next moment, that noise-reduced scene image is a historical noise-reduced scene image.
  • a corresponding quantity calculation strategy is configured accordingly, and the quantity calculation strategy is used to describe how to calculate and obtain the image acquisition quantity of the historical scene images to be acquired according to the state factor.
  • the configuration of the quantity calculation strategy is not specifically limited here, and can be configured by those skilled in the art according to actual needs.
  • For the scene type of the shooting scene, the division of scene types can be predefined; for example, the shooting scene can be directly divided into night scenes and non-night scenes according to the brightness of the shooting scene, the image acquisition numbers corresponding to night scenes and to non-night scenes can be configured based on experience, and the correspondence between the scene types and the image acquisition numbers is used as the quantity calculation strategy.
  • For the noise reduction quality of historical noise-reduced scene images, the corresponding quantity calculation strategy is configured under the constraint that the noise reduction quality is negatively correlated with the image acquisition number (that is, the lower the noise reduction quality of the historical noise-reduced scene images, the more images are acquired).
  • As above, when acquiring the historical scene images of the shooting scene of the electronic device, the embodiment of the present application first acquires the state factor used for determining the image acquisition number; then calculates, according to the state factor and its corresponding quantity calculation strategy, the corresponding image acquisition number; and finally acquires the historical scene images of the shooting scene according to the image acquisition number. For example, the image-acquisition-number of historical scene images closest in time to the current scene image may be acquired.
  • There may be one or more determined state factors. When there is one state factor, the historical scene images of the shooting scene are acquired directly according to the image acquisition number calculated from that state factor; when there are multiple state factors, a target image acquisition number is determined from the image acquisition numbers calculated from the multiple state factors, and the historical scene images of the shooting scene are acquired according to the target image acquisition number. For example, the target-number of historical scene images closest in time to the current scene image may be acquired.
  • When determining the target image acquisition number, the average of the image acquisition numbers corresponding to the multiple state factors can be calculated directly, and the average can be rounded (up or down) to give the target image acquisition number. Alternatively, weights can be assigned to the state factors in advance (by those skilled in the art according to actual needs), a weighted sum of the image acquisition numbers calculated from the multiple state factors is computed, and the weighted sum is rounded (up or down) to give the target image acquisition number.
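  • A minimal sketch of both variants is shown below; the state-factor names are illustrative placeholders, and in the weighted variant the weights are assumed to sum to 1.

```python
def target_acquisition_count(counts, weights=None):
    """Combine per-state-factor image counts into one target count.

    counts maps each state factor (e.g. "scene_type", "temperature") to the
    image acquisition number its quantity calculation strategy suggests.
    """
    if weights is None:
        # Unweighted variant: plain average of the counts, rounded.
        return round(sum(counts.values()) / len(counts))
    # Weighted variant: weighted sum of the counts, rounded.
    return round(sum(counts[k] * weights[k] for k in counts))

# Example: a night scene suggests 5 frames while thermal headroom suggests 3.
n = target_acquisition_count({"scene_type": 5, "temperature": 3})
```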
  • Optionally, in an embodiment, acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene images includes:
  • the historical scene images of the shooting scene are acquired according to the determined number of image acquisitions, and correspondingly, the relative motion vector of the object area in the shooting scene in the current scene image and each historical scene image is acquired.
  • Optionally, in an embodiment, generating an aligned scene image aligned with the current scene image according to the relative motion vector and the historical scene image includes:
  • When the historical scene images of the shooting scene are acquired according to the determined image acquisition number, an aligned scene image aligned with the current scene image is generated from each historical scene image and its corresponding relative motion vector, so as to obtain the image-acquisition-number of aligned scene images.
  • FIG. 4 is another schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the flowchart of the image processing method may include:
  • a current scene image and a historical scene image of the shooting scene of the electronic device are acquired.
  • the shooting scene of the electronic device can be understood as the area that the camera configured on the electronic device is aimed at after being enabled, that is, the area where the camera can convert optical signals into corresponding image data.
  • For example, after the electronic device enables the camera according to a user operation, if the user points the camera of the electronic device at an area including an object, the area including the object is the shooting scene of the camera.
  • the current scene image and the historical scene image of the shooting scene of the electronic device are first obtained for subsequent noise reduction processing.
  • The current scene image may be understood as the image obtained by the camera shooting the shooting scene at the current moment, and the historical scene image may be an image of the shooting scene captured by the camera before the current moment.
  • For example, the electronic device is provided with a buffer space (for example, a part of the memory is set aside as the buffer space) used for buffering the images captured by the camera. Correspondingly, when acquiring the current scene image and the historical scene image of the shooting scene, the current scene image can be acquired directly from the camera, and the buffered historical scene images of the shooting scene can be acquired from the buffer space. In addition, the aforementioned current scene image is also buffered into the buffer space.
  • After the current scene image and the historical scene image are acquired, object recognition can be performed on them to determine the objects in the shooting scene and, accordingly, the object area of each object. For example, for an object in the shooting scene, the smallest circumscribed rectangular area of the object is determined as the object area of that object.
  • the relative motion vector of each object area in the current scene image and the historical scene image is obtained according to the feature points of each object area in the shooting scene.
  • the relative motion vector of the feature point in the current scene image and the historical scene image is further acquired, and the relative motion vector of the feature point is used as the relative motion vector of the object area represented by it.
  • When the feature point relative motion vector is acquired, it is obtained by analysis and calculation according to the position and/or size difference of the feature point in the current scene image and the historical scene image, and it is used to describe the movement direction and movement speed of the feature point with the current scene image and the historical scene image as reference.
  • the configuration of the feature point determination strategy is not specifically limited, and can be specifically configured by those skilled in the art according to actual needs.
  • the feature point determination strategy is configured to determine feature points based on characteristics of different objects.
  • the vertex positions of some objects can be selected as feature points, such as the hand, head of a character, etc.; the edges of some objects can be selected as feature points, such as the edge of a car.
  • It should be noted that a feature point in the embodiments of the present application does not refer to a single pixel, but to all pixels in a specific, relatively fixed small area, such as the fingertip of a finger, the ear on a head, or the vertex of a moving object; all pixels corresponding to such an area are used as the feature point.
  • an aligned image content aligned with the image content of each object region in the current scene image is generated.
  • For each object area, the image content of the object area in the current scene image is known, the image content of the object area in the historical scene image is known, and the relative motion vector of the object area in the current scene image and the historical scene image is also known. Therefore, for each object area, the aligned image content aligned with the image content of that object area in the current scene image can be generated according to its image content in the historical scene image and its relative motion vector.
  • an aligned scene image aligned with the current scene image is generated according to the aligned image content of each object region.
  • As above, after the aligned image content of each object area is generated, an aligned scene image aligned with the current scene image can be generated according to the aligned image content of each object area; for example, the aligned image content of all object areas is directly spliced together, and the spliced image is used as the aligned scene image aligned with the current scene image.
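  • A minimal sketch of this splicing step is given below, assuming the object areas tile the frame and each area's aligned content has already been produced; the ((x, y, w, h), content) representation is an illustrative choice.

```python
import numpy as np

def splice_aligned_regions(regions, shape):
    """Paste each object area's aligned content at its location.

    regions: list of ((x, y, w, h), content) pairs covering the frame.
    shape:   (height, width) or (height, width, channels) of the output.
    """
    aligned = np.zeros(shape, dtype=np.uint8)
    for (x, y, w, h), content in regions:
        aligned[y:y + h, x:x + w] = content  # direct splicing, as described above
    return aligned
```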
  • a combined noise reduction process is performed on the current scene image and the aligned scene image to obtain a noise reduction scene image of the current scene image.
  • As above, after the aligned scene image aligned with the current scene image is generated, the embodiment of the present application further performs synthetic noise reduction processing on the current scene image and the generated aligned scene image according to the configured multi-frame noise reduction strategy, so as to obtain the noise-reduced scene image of the current scene image.
  • Exemplarily, the multi-frame noise reduction strategy can be configured as: for the pixel at each pixel position, directly calculating the average of the first pixel value of that pixel in the current scene image and the second pixel value of that pixel in the historical scene image, and generating the noise-reduced scene image from the average pixel values of all pixel positions.
  • Alternatively, the strategy can be configured as: for the pixel at each pixel position, calculating the average of the first pixel value and the second pixel value; then, according to the difference between each of the first and second pixel values and the average, assigning weights to the first and second pixel values and computing their weighted average; and generating the noise-reduced scene image from the weighted averages of all pixel positions.
  • The present application further provides an image processor 300. The image processor 300 includes a data interface unit 310 and a data processing unit 320, wherein:
  • a data interface unit 310 configured to acquire the current scene image and the historical scene image of the shooting scene of the electronic device
  • a data processing unit 320 configured to obtain the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image;
  • transmit the current scene image, the historical scene image and the relative motion vector to the application processor, so that the application processor generates, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image, and performs synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  • The image processor 300 provided in this embodiment of the present application may be applied to an electronic device configured with a camera and an application processor, and the image processor 300 and the application processor cooperate to perform noise reduction processing on images captured by the camera.
  • the data processing unit 320 is configured to:
  • transmit the region relative motion vector to the application processor, so that the application processor generates, according to the region relative motion vector and the historical scene image, the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image, and generates the aligned scene image according to the aligned image content and the second current image content of the non-noise reduction object area in the current scene image.
  • the data processing unit 320 is configured to:
  • the feature point relative motion vector is taken as the region relative motion vector.
  • the data processing unit 320 is configured to:
  • the corrected feature point relative motion vector is used as the region relative motion vector.
  • the data processing unit 320 is configured to:
  • acquire the user's degree of interest in each object area in the shooting scene, and determine the noise reduction object area from the object areas of the shooting scene according to the degree of interest in each object area.
  • the data interface unit 310 is used to:
  • acquire the state factor used for determining the image acquisition number; calculate, according to the state factor and its corresponding quantity calculation strategy, the image acquisition number corresponding to the state factor; and acquire the historical scene images of the shooting scene according to the image acquisition number.
  • the data processing unit 320 is configured to:
  • transmit the data to the application processor, so that the application processor generates, from each historical scene image and its corresponding relative motion vector, an aligned scene image aligned with the current scene image, obtaining the image-acquisition-number of aligned scene images, and performs synthetic noise reduction processing on the current scene image and those aligned scene images to obtain the noise-reduced scene image.
  • the state factor includes at least one of the scene type of the shooting scene, the operating power consumption and operating temperature of the electronic device, and the noise reduction quality of historical noise reduction scene images of the shooting scene.
  • The image processor 300 provided in the embodiments of the present application and the image processing methods in the above embodiments belong to the same concept; for the specific implementation process, reference can be made to the above related embodiments, which will not be repeated here.
  • FIG. 6 is a schematic structural diagram of an electronic device 400 provided by the present application
  • FIG. 7 is a schematic diagram of an application scenario of the electronic device 400.
  • the electronic device 400 includes:
  • a camera 410 used for collecting scene images of the shooting scene
  • the image processor 420 is configured to acquire the current scene image and the historical scene image of the shooting scene collected by the camera; and acquire the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image; and
  • the application processor 430, used to generate an aligned scene image aligned with the current scene image according to the relative motion vector and the historical scene image, and to perform synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  • the image processor 420 is configured to: determine, from the object areas of the shooting scene, a noise reduction object area that needs noise reduction processing, and acquire the region relative motion vector of the noise reduction object area in the current scene image and the historical scene image.
  • the application processor 430 is configured to: generate, according to the region relative motion vector and the historical scene image, the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image; and generate the aligned scene image according to the aligned image content and the second current image content of the non-noise reduction object area in the current scene image.
  • the application processor 430 is configured to:
  • the historical image content of the noise reduction object region in the historical scene image is remapped to obtain the aligned image content.
  • the image processor 420 is configured to:
  • the feature point relative motion vector is taken as the region relative motion vector.
  • the image processor 420 is configured to:
  • the corrected feature point relative motion vector is used as the region relative motion vector.
  • the image processor 420 is configured to:
  • acquire the user's degree of interest in each object area in the shooting scene, and determine the noise reduction object area from the object areas of the shooting scene according to the degree of interest in each object area.
  • the image processor 420 is configured to:
  • acquire the state factor used for determining the image acquisition number; calculate, according to the state factor and its corresponding quantity calculation strategy, the image acquisition number corresponding to the state factor; and acquire the historical scene images of the shooting scene according to the image acquisition number.
  • the image processor 420 is configured to: acquire the relative motion vector of the object area in the shooting scene in the current scene image and each historical scene image.
  • the application processor 430 is configured to:
  • generate, from each historical scene image and its corresponding relative motion vector, an aligned scene image aligned with the current scene image, so as to obtain the image-acquisition-number of aligned scene images;
  • the state factor includes at least one of the scene type of the shooting scene, the operating power consumption and operating temperature of the electronic device, and the noise reduction quality of historical noise reduction scene images of the shooting scene.
  • the present application further provides an electronic device 500 .
  • the electronic device 500 may include a memory 510 and a processor 520 .
  • The structure of the electronic device 500 shown in FIG. 8 does not constitute a limitation on the electronic device 500; it may include more or fewer components than shown, combine some components, or have a different component layout.
  • the memory 510 may be used to store computer programs and data.
  • the computer program stored in the memory 510 contains executable code.
  • a computer program can be divided into various functional modules.
  • The processor 520 is the control center of the electronic device 500. It uses various interfaces and lines to connect the various parts of the entire electronic device 500, and performs the various functions of the electronic device 500 and processes data by running or executing the computer program stored in the memory 510 and calling the data stored in the memory 510, so as to control the electronic device 500 as a whole.
  • In the embodiment of the present application, the processor 520 in the electronic device 500 loads the executable code corresponding to one or more computer programs into the memory 510, and the code is executed by the processor 520 to perform the following steps:
  • acquiring a current scene image and a historical scene image of the shooting scene of the electronic device; acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image; generating, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image; and performing synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  • When acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image, the processor 520 is configured to execute: determining, from the object areas of the shooting scene, a noise reduction object area that needs noise reduction processing; and acquiring the region relative motion vector of the noise reduction object area in the current scene image and the historical scene image.
  • When generating an aligned scene image aligned with the current scene image according to the relative motion vector and the historical scene image, the processor 520 is configured to execute: generating, according to the region relative motion vector and the historical scene image, the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image; and generating the aligned scene image according to the aligned image content and the second current image content of the non-noise reduction object area in the current scene image.
  • When generating the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image according to the region relative motion vector and the historical scene image, the processor 520 is configured to execute:
  • the historical image content of the noise reduction object region in the historical scene image is remapped to obtain the aligned image content.
  • When acquiring the region relative motion vector of the noise reduction object area in the current scene image and the historical scene image, the processor 520 is configured to execute:
  • the feature point relative motion vector is taken as the region relative motion vector.
  • When using the feature point relative motion vector as the region relative motion vector, the processor 520 is configured to execute:
  • the corrected feature point relative motion vector is used as the region relative motion vector.
  • When determining, from the object areas of the shooting scene, a noise reduction object area that needs noise reduction processing, the processor 520 is configured to execute:
  • acquiring the user's degree of interest in each object area in the shooting scene, and determining the noise reduction object area from the object areas of the shooting scene according to the degree of interest in each object area.
  • When acquiring the historical scene images of the shooting scene of the electronic device, the processor 520 is configured to execute: acquiring the state factor used for determining the image acquisition number; calculating, according to the state factor and its corresponding quantity calculation strategy, the image acquisition number corresponding to the state factor; and acquiring the historical scene images of the shooting scene according to the image acquisition number.
  • When acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene images, the processor 520 is configured to execute: acquiring the relative motion vector of the object area in the shooting scene in the current scene image and each historical scene image.
  • When generating an aligned scene image aligned with the current scene image according to the relative motion vector and the historical scene image, the processor 520 is configured to execute:
  • generating, from each historical scene image and its corresponding relative motion vector, an aligned scene image aligned with the current scene image, so as to obtain the image-acquisition-number of aligned scene images;
  • When performing synthetic noise reduction processing on the current scene image and the aligned scene images to obtain the noise-reduced scene image of the current scene image, the processor 520 is configured to execute: performing synthetic noise reduction processing on the current scene image and the image-acquisition-number of aligned scene images to obtain the noise-reduced scene image.
  • the state factor includes at least one of the scene type of the shooting scene, the operating power consumption and operating temperature of the electronic device, and the noise reduction quality of historical noise reduction scene images of the shooting scene.
  • Embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, and the computer program can be loaded by a processor to execute steps in any image processing method provided by the embodiments of the present application.
  • the computer program may cause the processor to perform the following steps:
  • acquiring a current scene image and a historical scene image of the shooting scene of the electronic device; acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image; generating, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image; and performing synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  • The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.

Landscapes

  • Image Analysis (AREA)

Abstract

The present application provides an image processing method: a current scene image and a historical scene image of a shooting scene of an electronic device are acquired; the relative motion vector of an object area in the shooting scene in the current scene image and the historical scene image is acquired; an aligned scene image aligned with the current scene image is generated according to the relative motion vector and the historical scene image; and synthetic noise reduction processing is performed on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.

Description

Image processing method, image processor, electronic device and storage medium
This application claims priority to Chinese Patent Application No. 202110476533.7, filed with the Chinese Patent Office on April 29, 2021 and entitled "Image processing method, image processor, electronic device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application belongs to the technical field of electronic devices, and in particular relates to an image processing method, an image processor, an electronic device and a storage medium.
Background
Electronic devices such as mobile phones and tablet computers are usually equipped with cameras to provide users with a photographing function, so that users can record, anytime and anywhere, what happens around them and the scenery they see. However, due to the hardware of the electronic device itself, there is usually noise in the images captured by the electronic device, and this noise degrades image quality. Therefore, it is necessary to perform noise reduction processing on images captured by electronic devices.
Summary
Embodiments of the present application provide an image processing method, an image processor, an electronic device and a storage medium, which can perform noise reduction processing on images captured by the electronic device to make the images clearer.
The present application discloses an image processing method, including:
acquiring a current scene image and a historical scene image of a shooting scene of an electronic device;
acquiring a relative motion vector of an object area in the shooting scene in the current scene image and the historical scene image;
generating, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image; and
performing synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
The present application further discloses an image processor, including:
a data interface unit, configured to acquire a current scene image and a historical scene image of a shooting scene of an electronic device;
a data processing unit, configured to acquire a relative motion vector of an object area in the shooting scene in the current scene image and the historical scene image; and
to transmit the current scene image, the historical scene image and the relative motion vector to an application processor, so that the application processor generates, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image, and performs synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
The present application further discloses an electronic device, including:
a camera, configured to collect scene images of a shooting scene;
an image processor, configured to acquire a current scene image and a historical scene image of the shooting scene collected by the camera, and to acquire a relative motion vector of an object area in the shooting scene in the current scene image and the historical scene image; and
an application processor, configured to generate, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image, and to perform synthetic noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
The present application further discloses an electronic device, including a processor and a memory, where the memory stores a computer program, and the processor executes the image processing method provided by the present application by loading the computer program.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below.
FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
FIG. 2 is a schematic diagram of obtaining a noise-reduced scene image by performing synthetic noise reduction processing on a current scene image and an aligned scene image in an embodiment of the present application.
FIG. 3 is an example diagram of generating an aligned scene image according to a noise reduction object area in an embodiment of the present application.
FIG. 4 is another schematic flowchart of an image processing method provided by an embodiment of the present application.
FIG. 5 is a schematic structural diagram of an image processor provided in an embodiment of the present application.
FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
FIG. 7 is a schematic diagram of an application scenario of an electronic device provided by an embodiment of the present application.
FIG. 8 is another schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 1, the flow of the image processing method may include:
At 110, a current scene image and a historical scene image of the shooting scene of the electronic device are acquired.
It should be noted that the image processing method provided by the present application may be deployed in an electronic device equipped with a camera, to perform noise reduction processing on images captured by the camera. Noise reduction processing can be plainly understood as eliminating the noise present in an image, thereby improving image quality. Noise, also called image noise or grain, mainly refers to the rough parts of an image produced while the electronic device receives and outputs light as a signal, and also refers to foreign pixels that should not appear in the image, usually caused by electronic interference. A noisy image looks like a clean image that has been smudged and covered with small rough spots.
The electronic device may be a mobile electronic device with a camera, such as a smartphone, a tablet computer, a palmtop computer or a notebook computer, or a stationary electronic device with a camera, such as a desktop computer or a TV, which is not specifically limited in this application.
The embodiment of the present application does not specifically limit the type and number of cameras configured on the electronic device, which can be configured by those of ordinary skill in the art according to actual needs. For example, taking the shooting frame rate as the classification standard, the camera configured on the electronic device may be a low-frame-rate camera (shooting frame rate usually within 100 frames per second, such as 30 or 60 frames per second) or a high-frame-rate camera (shooting frame rate usually above 100 frames per second, such as 120 frames per second). As another example, taking the focal length as the classification standard, the camera may be a standard camera (focal length between 40 mm and 60 mm), a wide-angle camera (focal length below 40 mm), or a telephoto camera (focal length above 60 mm).
The shooting scene of the electronic device can be understood as the area that the camera configured on the electronic device is aimed at after being enabled, that is, the area where the camera can convert optical signals into corresponding image data. For example, after the electronic device enables the camera according to a user operation, if the user points the camera of the electronic device at an area including an object, the area including the object is the shooting scene of the camera.
In the embodiment of the present application, the current scene image and the historical scene image of the shooting scene of the electronic device are first acquired for subsequent noise reduction processing. The current scene image can be understood as the image obtained by the camera shooting the shooting scene at the current moment, and the historical scene image can be an image of the shooting scene captured by the camera before the current moment.
Exemplarily, the electronic device is provided with a buffer space (for example, a part of the memory is set aside as the buffer space), and the buffer space is used for buffering the images captured by the camera. Correspondingly, when acquiring the current scene image and the historical scene image of the shooting scene of the electronic device, the current scene image of the shooting scene can be acquired directly from the camera, and the buffered historical scene images of the shooting scene can be acquired from the buffer space. In addition, the aforementioned current scene image is also buffered into the buffer space.
It should be noted that the embodiment of the present application does not specifically limit the number of historical scene images acquired; for example, one historical scene image or multiple historical scene images can be acquired. When multiple historical scene images are acquired, a fixed number of historical scene images can be acquired, or the number of historical scene images to acquire can be determined dynamically.
At 120, the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image is acquired.
It can be understood that in a shooting scene there may be all kinds of objects, such as people, animals, plants and buildings, and these objects are imaged into the images captured by the camera.
Correspondingly, after the current scene image and the historical scene image of the shooting scene are acquired, the objects in the shooting scene can be determined by performing object recognition on the acquired current scene image and/or historical scene image, and the object areas can be determined accordingly. For example, for an object in the shooting scene, the smallest circumscribed rectangular area of the object is determined as the object area of that object.
It should be noted that since there are all kinds of objects in the shooting scene, at least one object area will be determined accordingly. When acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image, the relative motion vectors of all or some of the object areas in the shooting scene can be acquired.
For example, when acquiring the relative motion vector of an object area, the relative motion vector of the object area is obtained by analysis and calculation according to the position and/or size difference of the object area in the current scene image and the historical scene image; the relative motion vector is used to describe the movement direction and movement speed of the object in the object area with the current scene image and the historical scene image as reference.
At 130, an aligned scene image aligned with the current scene image is generated according to the relative motion vector and the historical scene image.
It can be understood that for an object area, its image content in the current scene image is known, its image content in the historical scene image is also known, and its relative motion vector in the current scene image and the historical scene image is also known. Therefore, according to the obtained relative motion vector and the historical scene image, an image aligned with the current scene image can be generated, recorded as the aligned scene image. It should be noted that the number of generated aligned scene images is the same as the number of acquired historical scene images. For example, assuming that five historical scene images of the current scene image at different times are acquired, five aligned scene images are correspondingly generated according to the five historical scene images and their respective relative motion vectors.
At 140, synthetic noise reduction processing is performed on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
As above, after the aligned scene image aligned with the current scene image is generated, the embodiment of the present application further performs synthetic noise reduction processing on the current scene image and the generated aligned scene image according to the configured multi-frame noise reduction strategy, so as to obtain the noise-reduced scene image of the current scene image.
For example, referring to FIG. 2, it is assumed that one aligned scene image is generated. As shown in FIG. 2, both the current scene image and the aligned scene image contain some noise, while the noise-reduced scene image obtained by the synthetic noise reduction processing contains no noise.
It should be noted that the embodiment of the present application does not specifically limit the configuration of the multi-frame noise reduction strategy, which can be configured by those skilled in the art according to actual needs.
Exemplarily, the multi-frame noise reduction strategy can be configured as:
for the pixel at each pixel position, directly calculating the average of the first pixel value of that pixel in the current scene image and the second pixel value of that pixel in the historical scene image, and generating the noise-reduced scene image from the average pixel values of all pixel positions.
Alternatively, the multi-frame noise reduction strategy can be configured as:
for the pixel at each pixel position, calculating the average of the first pixel value of that pixel in the current scene image and the second pixel value of that pixel in the historical scene image; further assigning weights to the first pixel value and the second pixel value according to the difference between each of them and the average, and computing their weighted average; and generating the noise-reduced scene image from the weighted averages of all pixel positions.
As can be seen from the above, the embodiment of the present application acquires the current scene image and the historical scene image of the shooting scene of the electronic device, acquires the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image, further generates an aligned scene image aligned with the current scene image according to the obtained relative motion vector and the historical scene image, and finally performs synthetic noise reduction processing on the current scene image and the aligned scene image to obtain the noise-reduced scene image of the current scene image. Thus, compared with traditional noise reduction methods, the embodiment of the present application does not need to analyze the noise patterns in the image to perform noise reduction; instead, it directly uses previously generated historical images to perform synthetic noise reduction on the current image, which not only achieves noise reduction but also compensates for the image loss caused by traditional noise reduction methods, making the noise-reduced image clearer.
Optionally, in an embodiment, acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image includes:
(1) determining, from the object areas of the shooting scene, a noise reduction object area that needs noise reduction processing;
(2) acquiring the region relative motion vector of the noise reduction object area in the current scene image and the historical scene image.
It can be understood that when a user faces a shooting scene, the user usually cannot pay attention to all the objects in it, but only to the objects of interest. For example, when the user takes a portrait, the user usually focuses on the people in the shooting scene. Therefore, in the embodiment of the present application, the noise reduction processing of the current scene image is implemented based only on some object areas in the shooting scene.
When acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image, the object area that needs noise reduction processing is first determined from the object areas of the shooting scene according to the configured object determination strategy and recorded as the noise reduction object area; then only the relative motion vector of the determined noise reduction object area in the current scene image and the historical scene image is acquired, recorded as the region relative motion vector.
It should be noted that the embodiment of the present application does not specifically limit the configuration of the object determination strategy, which can be configured by those skilled in the art according to actual needs. For example, the object determination strategy is configured as: determining, among the multiple object areas, the object area that the user is interested in as the target object area to be subjected to noise reduction processing. There is no specific limitation on how to identify the object area that the user is interested in, and a person skilled in the art can configure an appropriate identification manner according to actual needs.
Optionally, in an embodiment, determining the noise reduction object area that needs noise reduction processing from the object areas of the shooting scene includes:
(1) acquiring the user's degree of interest in each object area in the shooting scene;
(2) determining the noise reduction object area from the object areas of the shooting scene according to the degree of interest in each object area.
In the embodiment of the present application, when determining the noise reduction object area that needs noise reduction processing from the object areas of the shooting scene, the user's degree of interest in each object area in the shooting scene of the electronic device is evaluated according to the configured interest degree evaluation strategy, so as to obtain the user's degree of interest in each object area in the shooting scene of the electronic device. The configuration of the interest degree evaluation strategy is not specifically limited here, and can be configured by those skilled in the art according to actual needs.
As above, after the user's degree of interest in each object area in the shooting scene of the electronic device is obtained, the noise reduction object area that needs noise reduction processing is further determined from the object areas of the shooting scene of the electronic device according to the degree of interest in each object area.
Exemplarily, the object area with the highest degree of interest among the object areas of the shooting scene is determined as the noise reduction object area, or an object area whose degree of interest reaches a degree threshold is determined as the noise reduction object area. The embodiment of the present application does not specifically limit the value of the degree threshold, which can be chosen by those skilled in the art according to actual needs.
Optionally, in an embodiment, acquiring the user's degree of interest in each object area in the shooting scene includes:
(1) acquiring the focus distance corresponding to the current scene image, and acquiring the depth distance of each object area;
(2) acquiring the user's degree of interest in each object area according to the difference between the depth distance of each object area and the focus distance.
The embodiments of the present application provide an optional interest evaluation strategy.
It should be noted that, when operating the electronic device for shooting, the user usually operates the electronic device to focus on the object of interest in the shooting scene; therefore, the degree of interest can be evaluated according to the focusing situation.
First, the focus distance corresponding to the current scene image is acquired, and the depth distance of each object area in the shooting scene is acquired. How to acquire the depth distance of each object area is not specifically limited here, and an acquisition method may be configured by those skilled in the art according to actual needs.
Exemplarily, when the electronic device is configured with two cameras arranged side by side, the current scene image may be acquired from one camera, and another scene image shot synchronously may be acquired from the other camera. Since the two cameras are arranged side by side, parallax exists between the current scene image and the other scene image. Correspondingly, a triangulation parallax algorithm may be used to calculate the depth distance of each object area from the current scene image and the aforementioned other scene image.
As above, after the focus distance corresponding to the current scene image and the depth distance of each object area in the shooting scene are obtained, the user's degree of interest in each object area is further acquired according to the difference between the depth distance of each object area and the focus distance.
Exemplarily, in the embodiments of the present application, the degree of interest is configured to be negatively correlated with the difference between the depth distance of an object area and the focus distance; that is, for an object area, the larger the difference between its depth distance and the focus distance, the lower the user's degree of interest in it.
For example, three interest levels may be defined, namely low, medium, and high, and three corresponding intervals of the difference between the depth distance and the focus distance may be defined, namely interval A, interval B, and interval C; interval A is associated with the interest level "low", interval B with "medium", and interval C with "high". When evaluating the degree of interest, for an object area, if the difference between its depth distance and the focus distance falls within interval A, its interest level is determined to be "low"; if the difference falls within interval B, its interest level is determined to be "medium"; and if the difference falls within interval C, its interest level is determined to be "high".
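For illustration, a minimal sketch of this interval mapping follows; the interval boundaries (0.3 m and 1.0 m) are assumptions of the sketch, since the application leaves the intervals to the implementer:

    def interest_from_defocus(depth_m, focus_m):
        """Map |depth - focus| to an interest level via difference intervals."""
        diff = abs(depth_m - focus_m)
        if diff < 0.3:        # interval C: close to the focal plane
            return "high"
        if diff < 1.0:        # interval B
            return "medium"
        return "low"          # interval A: far from the focal plane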
Optionally, in an embodiment, acquiring the user's degree of interest in each object area in the shooting scene includes:
evaluating the degree of interest in each object area through an interest measurement model to obtain the degree of interest in each object area.
It should be noted that an interest measurement model is trained in advance in the embodiments of the present application; the interest measurement model is configured to evaluate the degree of interest in an image and output a value representing the degree of interest accordingly. The architecture and training method of the interest measurement model are not specifically limited here and may be configured by those skilled in the art according to actual needs.
For example, a convolutional neural network may be selected as the base architecture of the interest measurement model and trained in a supervised manner using image samples labeled with interest levels, thereby obtaining an interest measurement model for evaluating the degree of interest.
Correspondingly, in the embodiments of the present application, for an object area, the image content of the object area in the current scene image is input into the interest measurement model to obtain the degree of interest in the object area; in this way, the degree of interest in each object area is acquired through the interest measurement model.
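Purely for illustration, a minimal stand-in for such an interest measurement model is sketched below in PyTorch; the architecture, the 64x64 input size, and the [0, 1] score head are assumptions of this sketch, since the application leaves both the architecture and the training method open:

    import torch
    import torch.nn as nn

    class InterestNet(nn.Module):
        """A small CNN that scores an RGB crop of an object area in [0, 1]."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

        def forward(self, x):
            return self.head(self.features(x))

    # Scoring one 64x64 region crop (batch of 1); a trained checkpoint would
    # normally be loaded first.
    model = InterestNet().eval()
    with torch.no_grad():
        score = model(torch.rand(1, 3, 64, 64)).item()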
Optionally, in an embodiment, generating, according to the relative motion vector and the historical scene image, the aligned scene image aligned with the current scene image includes:
(1) generating, according to the area relative motion vector of the noise reduction object area and the historical scene image, aligned image content aligned with the first current image content of the noise reduction object area in the current scene image;
(2) generating, according to the aforementioned aligned image content and the second current image content of the non-noise-reduction object areas in the current scene image, the aligned scene image aligned with the current scene image.
The embodiments of the present application further provide a way of generating the aligned scene image based on the noise reduction object area.
According to the area relative motion vector of the noise reduction object area and the historical scene image, aligned image content aligned with the first current image content of the noise reduction object area in the current scene image is generated, and the second current image content of the non-noise-reduction object areas in the current scene image is directly used as their aligned image content; correspondingly, the aligned scene image aligned with the current scene image is generated from the aligned image content aligned with the first current image content and the second current image content of the non-noise-reduction object areas in the current scene image.
For example, referring to FIG. 3, the shooting scene is divided into three object areas, one of which is determined as the noise reduction object area, and the other two are non-noise-reduction object area A and non-noise-reduction object area B. When generating the aligned scene image aligned with the current scene image, aligned image content aligned with the first current image content of the noise reduction object area in the current scene image is generated according to the image content of the noise reduction object area in the historical scene image and its area relative motion vector; the image content of non-noise-reduction object area A in the current scene image is directly used as its aligned image content, and the image content of non-noise-reduction object area B in the current scene image is used as its aligned image content; finally, the image content of non-noise-reduction object area A, the image content of non-noise-reduction object area B, and the aforementioned aligned image content are stitched together, and the stitched image is used as the aligned scene image aligned with the current scene image.
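A minimal sketch of this stitching step follows, assuming a boolean mask marking the noise reduction object area and a full-size image holding the remapped historical content; both inputs are hypothetical names introduced for this sketch:

    import numpy as np

    def compose_aligned_frame(current, region_mask, aligned_region):
        """Build the aligned scene image: inside the noise reduction object
        area, use the content remapped from the historical frame; everywhere
        else, reuse the current frame directly (non-noise-reduction areas
        need no alignment)."""
        out = current.copy()
        out[region_mask] = aligned_region[region_mask]
        return out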
Optionally, in an embodiment, generating, according to the area relative motion vector of the noise reduction object area and the historical scene image, the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image includes:
remapping, according to the area relative motion vector of the noise reduction object area, the historical image content of the noise reduction object area in the historical scene image to obtain the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image.
Remapping means mapping a source image into another image through a certain mathematical formula; in plain terms, it means placing the element at a certain position in one image at a specified position in another image.
In the embodiments of the present application, when generating the aligned image content according to the area relative motion vector of the noise reduction object area and the historical scene image, the historical image content of the noise reduction object area in the historical scene image is remapped according to the area relative motion vector, thereby obtaining the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image.
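For illustration, the remapping of a region by its area relative motion vector can be sketched with OpenCV's remap, assuming the motion is a pure translation in pixels; real implementations may use richer mappings:

    import cv2
    import numpy as np

    def remap_region(hist_img, motion_vec):
        """Shift the historical content by the relative motion vector so it
        lands where the region sits in the current frame. cv2.remap samples,
        for each output pixel (x, y), the source pixel at
        (map_x[y, x], map_y[y, x])."""
        h, w = hist_img.shape[:2]
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        # Output pixel (x, y) takes the historical pixel one motion step back.
        map_x = xs - np.float32(motion_vec[0])
        map_y = ys - np.float32(motion_vec[1])
        return cv2.remap(hist_img, map_x, map_y, cv2.INTER_LINEAR)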
Optionally, in an embodiment, acquiring the area relative motion vector of the noise reduction object area in the current scene image and the historical scene image includes:
(1) determining a feature point used to characterize the noise reduction object area;
(2) acquiring the feature-point relative motion vector of the feature point in the current scene image and the historical scene image;
(3) using the feature-point relative motion vector as the area relative motion vector of the noise reduction object area.
To improve the efficiency of acquiring the area relative motion vector, when acquiring the area relative motion vector of the noise reduction object area in the current scene image and the historical scene image, the embodiments of the present application first determine, according to a configured feature point determination strategy, a feature point used to characterize the noise reduction object area, and use the feature point to represent the noise reduction object area for acquiring the area relative motion vector. Then, the relative motion vector of the feature point in the current scene image and the historical scene image is further acquired, denoted as the feature-point relative motion vector, and the feature-point relative motion vector is used as the area relative motion vector of the noise reduction object area.
It should be noted that the embodiments of the present application do not specifically limit the configuration of the feature point determination strategy, which may be configured by those skilled in the art according to actual needs.
Exemplarily, the feature point determination strategy is configured to determine feature points based on the characteristics of different objects.
For example, the vertex positions of certain objects may be selected as feature points, such as a person's hands or head; the edges of certain objects may be selected as feature points, such as the edges of a car.
It should be noted that, in the embodiments of the present application, a feature point does not refer to a single pixel, but to all the pixels of a specific, relatively fixed small region, such as a fingertip, an ear on the head, or the vertex of a moving object; all the pixels corresponding to such a region are used as the feature point.
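For illustration only, a plausible realization of feature-point tracking with OpenCV is sketched below, using corner detection plus pyramidal Lucas-Kanade optical flow; the application does not prescribe these particular operators, and the mask and parameter values are assumptions of this sketch:

    import cv2
    import numpy as np

    def feature_point_motion(prev_gray, curr_gray, region_mask):
        """Track feature points inside the noise reduction object area between
        the historical frame and the current frame (both grayscale uint8), and
        return their mean displacement as the feature-point relative motion
        vector. region_mask is a uint8 mask, 255 inside the area."""
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=7,
                                      mask=region_mask)
        if pts is None:
            return np.zeros(2, dtype=np.float32)
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
        good = status.ravel() == 1
        if not good.any():
            return np.zeros(2, dtype=np.float32)
        return (nxt[good] - pts[good]).reshape(-1, 2).mean(axis=0)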
Optionally, in an embodiment, using the feature-point relative motion vector as the area relative motion vector includes:
(1) acquiring the current motion vector of the electronic device;
(2) correcting the feature-point relative motion vector according to the current motion vector to obtain a corrected feature-point relative motion vector;
(3) using the corrected feature-point relative motion vector as the area relative motion vector of the noise reduction object area.
It can be understood that, when the electronic device is shooting, in addition to the motion of the objects in the shooting scene themselves, the electronic device itself may also move; for example, when the user's hand holding the electronic device shakes, the electronic device will move along with the user's shake. Therefore, in the embodiments of the present application, the influence of the electronic device's own motion is eliminated to improve the accuracy of the acquired area relative motion vector of the noise reduction object area.
The current motion vector of the electronic device is acquired from a motion sensor configured on the electronic device (including but not limited to a gyroscope sensor, an acceleration sensor, and the like), and the aforementioned feature-point relative motion vector is corrected according to the current motion vector of the electronic device to obtain the corrected feature-point relative motion vector.
It should be noted that, since the image content of the images of the shooting scene captured by the electronic device is jointly affected by the motion of the objects in the shooting scene and the motion of the electronic device, the feature-point relative motion vector is corrected according to the current motion vector of the electronic device by the following formula:
V'_feature = V_feature - V_device;
where V'_feature denotes the corrected feature-point relative motion vector, V_feature denotes the feature-point relative motion vector, and V_device denotes the current motion vector of the electronic device.
As above, after the correction of the feature-point relative motion vector is completed and the corrected feature-point relative motion vector is obtained, the corrected feature-point relative motion vector is directly used as the area relative motion vector of the noise reduction object area.
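A minimal sketch of this correction, assuming the device motion has already been projected into image-plane pixel units (how that projection from gyroscope or accelerometer readings is done is outside this sketch):

    import numpy as np

    def correct_for_device_motion(v_feature, v_device):
        """Remove the device's own motion from the observed feature-point
        motion: V'_feature = V_feature - V_device."""
        return np.asarray(v_feature, np.float32) - np.asarray(v_device, np.float32)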
Optionally, in an embodiment, acquiring the historical scene image of the shooting scene of the electronic device includes:
(1) acquiring a state factor used to determine an image acquisition quantity;
(2) calculating, according to the state factor and a quantity calculation strategy corresponding to the state factor, the image acquisition quantity corresponding to the state factor;
(3) acquiring historical scene images of the shooting scene according to the image acquisition quantity.
In the embodiments of the present application, when acquiring historical scene images, a fixed number of historical scene images is not acquired; instead, the number of historical scene images to be acquired is determined dynamically.
It should be noted that a state factor used to determine the image acquisition quantity is defined in advance in the embodiments of the present application. The state factor includes at least one of the scene type of the shooting scene, the operating power consumption of the electronic device, the operating temperature of the electronic device, and the noise reduction quality of historical noise-reduced scene images of the shooting scene. The historical noise-reduced scene images include images obtained by performing noise reduction on historical scene images according to the image processing method provided by the embodiments of the present application; for example, at the current moment, the current scene image is noise-reduced according to the image processing method provided by the embodiments of the present application to obtain the noise-reduced scene image of the current scene image, and at the next moment, this noise-reduced scene image becomes a historical noise-reduced scene image.
In addition, for each state factor, a corresponding quantity calculation strategy is configured; the quantity calculation strategy describes how to calculate, from the state factor, the image acquisition quantity of historical scene images to be acquired. The configuration of the quantity calculation strategy is not specifically limited here and may be configured by those skilled in the art according to actual needs.
For example, for the state factor "operating power consumption of the electronic device", the quantity calculation strategy corresponding to the operating power consumption is configured under the constraint that the operating power consumption is negatively correlated with the image acquisition quantity (that is, the lower the operating power consumption of the electronic device, the larger the corresponding image acquisition quantity);
for another example, for the state factor "scene type of the shooting scene", a scene type division may be defined in advance, for example, dividing shooting scenes into night shooting scenes and non-night shooting scenes according to the brightness of the shooting scene, configuring an image acquisition quantity for night shooting scenes and an image acquisition quantity for non-night shooting scenes based on experience, and using the correspondence between scene type and image acquisition quantity as the quantity calculation strategy;
for yet another example, for the state factor "noise reduction quality of historical noise-reduced scene images", the quantity calculation strategy corresponding to the noise reduction quality is configured under the constraint that the noise reduction quality is negatively correlated with the image acquisition quantity (that is, the lower the noise reduction quality of the historical noise-reduced scene images, the larger the corresponding image acquisition quantity).
As above, when acquiring the historical scene images of the shooting scene of the electronic device, the embodiments of the present application first acquire the state factor used to determine the image acquisition quantity; then calculate, according to the state factor and the quantity calculation strategy corresponding to the state factor, the image acquisition quantity corresponding to the state factor; and finally acquire the historical scene images of the shooting scene according to the image acquisition quantity. For example, the historical scene images closest in time to the current scene image, up to the image acquisition quantity, may be acquired.
It should be noted that one or more state factors may be determined. When one state factor is determined, the historical scene images of the shooting scene are acquired directly according to the image acquisition quantity calculated from that state factor; when multiple state factors are determined, a target image acquisition quantity is determined from the image acquisition quantities calculated from the multiple state factors, and the historical scene images of the shooting scene are acquired according to the target image acquisition quantity. For example, the historical scene images closest in time to the current scene image, up to the target image acquisition quantity, may be acquired.
When determining the target image acquisition quantity from the image acquisition quantities calculated from the multiple state factors, the average of the image acquisition quantities corresponding to the multiple state factors may be calculated directly, and the average may be rounded (up or down) to obtain the target image acquisition quantity; alternatively, weights may be assigned to the state factors in advance (by those skilled in the art according to actual needs), and when determining the target image acquisition quantity, a weighted sum of the image acquisition quantities calculated from the multiple state factors is computed according to the weights of the state factors, and the weighted sum is rounded (up or down) to obtain the target image acquisition quantity.
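For illustration, the following sketch combines the per-factor counts into a target image acquisition quantity by a weighted average rounded down; the factor names, the equal default weights, and the floor of 1 frame are assumptions of this sketch:

    import math

    def target_frame_count(counts_by_factor, weights=None):
        """Combine per-factor image acquisition quantities into one target
        count via a (weighted) average, rounded down, never below 1."""
        factors = list(counts_by_factor)
        if weights is None:
            weights = {f: 1.0 / len(factors) for f in factors}
        total = sum(weights[f] * counts_by_factor[f] for f in factors)
        return max(1, math.floor(total / sum(weights[f] for f in factors)))

    # e.g. the power budget allows 3 frames, a night scene asks for 6, and
    # poor previous noise reduction quality asks for 5.
    n = target_frame_count({"power": 3, "scene_type": 6, "prev_quality": 5})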
Optionally, in an embodiment, acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image includes:
acquiring the relative motion vector of the object area in the shooting scene in the current scene image and each historical scene image.
In the embodiments of the present application, historical scene images of the shooting scene are acquired according to the determined image acquisition quantity; correspondingly, the relative motion vector of the object area in the shooting scene in the current scene image and each historical scene image is acquired.
For example, assuming that five historical scene images are acquired, namely historical scene images A, B, C, D, and E, then for an object area in the shooting scene, the relative motion vector of the object area in the current scene image and historical scene image A is acquired, the relative motion vector of the object area in the current scene image and historical scene image B is acquired, and likewise for historical scene images C, D, and E.
Optionally, in an embodiment, generating, according to the relative motion vector and the historical scene image, the aligned scene image aligned with the current scene image includes:
generating, according to each historical scene image and its corresponding relative motion vector, an aligned scene image aligned with the current scene image, to obtain the image-acquisition-quantity of aligned scene images;
and performing synthesis noise reduction processing on the current scene image and the aligned scene images to obtain the noise-reduced scene image of the current scene image includes:
performing synthesis noise reduction processing on the current scene image and the image-acquisition-quantity of aligned scene images to obtain the noise-reduced scene image.
In the embodiments of the present application, historical scene images of the shooting scene are acquired according to the determined image acquisition quantity; correspondingly, when generating the aligned scene image according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image is generated according to each historical scene image and its corresponding relative motion vector, obtaining the image-acquisition-quantity of aligned scene images. For how to generate an aligned scene image, reference may be made to the relevant descriptions in the above embodiments, which will not be repeated here.
When performing synthesis noise reduction processing on the current scene image and the aligned scene images to obtain the noise-reduced scene image of the current scene image, correspondingly, synthesis noise reduction processing is performed on the current scene image and the image-acquisition-quantity of aligned scene images to obtain the noise-reduced scene image. For how to perform the synthesis noise reduction processing, reference may be made to the relevant descriptions in the above embodiments, which will not be repeated here.
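Tying the steps together, a minimal end-to-end sketch follows; estimate_motion, align, and fuse are hypothetical callables standing in for the steps sketched earlier, not APIs defined by the application:

    def denoise_current_frame(current, history, estimate_motion, align, fuse):
        """For each historical frame, estimate the relative motion vector(s),
        generate an aligned scene image, then fuse the current frame with all
        aligned frames via the multi-frame noise reduction strategy."""
        aligned = []
        for hist in history:
            vec = estimate_motion(hist, current)   # per-region motion vector(s)
            aligned.append(align(hist, vec))       # aligned scene image
        return fuse(current, aligned)              # noise-reduced scene image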
Referring to FIG. 4, FIG. 4 is another schematic flowchart of the image processing method provided by an embodiment of the present application. As shown in FIG. 4, the flow of the image processing method may include:
In 210, the current scene image and the historical scene image of the shooting scene of the electronic device are acquired.
The shooting scene of the electronic device may be understood as the area at which the camera configured on the electronic device is aimed after being enabled, that is, the area in which the camera can convert optical signals into corresponding image data. For example, after the electronic device enables the camera according to a user operation, if the user points the camera of the electronic device at an area including a certain object, the area including that object is the shooting scene of the camera.
In the embodiments of the present application, the current scene image and the historical scene image of the shooting scene of the electronic device are first acquired for subsequent noise reduction processing. The current scene image may be understood as the image obtained by the camera shooting the shooting scene at the current moment, and the historical scene image may be an image of the shooting scene captured by the camera before the current moment.
Exemplarily, the electronic device provides a cache space (for example, a portion of memory is partitioned as the cache space) for caching the images captured by the camera. Correspondingly, when acquiring the current scene image and the historical scene image of the shooting scene, the current scene image may be acquired directly from the camera, and the cached historical scene image may be acquired from the cache space. In addition, the aforementioned current scene image is also cached into the cache space.
In 220, the relative motion vector of each object area in the shooting scene in the current scene image and the historical scene image is acquired according to the feature point of each object area in the shooting scene.
It can be understood that, for a shooting scene, various objects may exist therein, such as people, animals, plants, and buildings, and these objects will be imaged into the images captured by the camera.
Correspondingly, after the current scene image and the historical scene image of the shooting scene are acquired, the objects in the shooting scene can be determined by performing object recognition on the acquired current scene image and/or historical scene image, and the object areas can be determined accordingly. For example, for an object in the shooting scene, the minimum bounding rectangle of the object is determined as the object area of that object.
It should be noted that, since various objects exist in the shooting scene, at least one object area will be determined accordingly. In the embodiments of the present application, for each object area in the shooting scene, the relative motion vector of the object area in the current scene image and the historical scene image is acquired according to the feature point of the object area.
To improve the efficiency of acquiring the relative motion vector, for an object area, the embodiments of the present application first determine, according to a configured feature point determination strategy, a feature point used to characterize the object area, and use the feature point to represent the object area for acquiring the relative motion vector. Then, the relative motion vector of the feature point in the current scene image and the historical scene image is further acquired, and the relative motion vector of the feature point is used as the relative motion vector of the object area it characterizes.
When acquiring the relative motion vector of the feature point, the relative motion vector of the feature point is calculated by analyzing the difference in position and/or size of the feature point between the current scene image and the historical scene image; the relative motion vector describes the motion direction and motion speed of the feature point with the current scene image and the historical scene image as references.
It should be noted that the embodiments of the present application do not specifically limit the configuration of the feature point determination strategy, which may be configured by those skilled in the art according to actual needs.
Exemplarily, the feature point determination strategy is configured to determine feature points based on the characteristics of different objects.
For example, the vertex positions of certain objects may be selected as feature points, such as a person's hands or head; the edges of certain objects may be selected as feature points, such as the edges of a car.
It should be noted that, in the embodiments of the present application, a feature point does not refer to a single pixel, but to all the pixels of a specific, relatively fixed small region, such as a fingertip, an ear on the head, or the vertex of a moving object; all the pixels corresponding to such a region are used as the feature point.
In 230, according to the image content of each object area in the historical scene image and its relative motion vector, aligned image content aligned with the image content of each object area in the current scene image is generated.
It can be understood that, for an object area, the image content of the object area in the current scene image is known, the image content of the object area in the historical scene image is also known, and the relative motion vector of the object area between the current scene image and the historical scene image is known as well. Therefore, for each object area, aligned image content aligned with the image content of the object area in the current scene image can be generated according to the image content of the object area in the historical scene image and its relative motion vector.
In 240, the aligned scene image aligned with the current scene image is generated according to the aligned image content of each object area.
It can be understood that, at this point, the image content of each object area in the shooting scene has been aligned between the historical scene image and the current scene image; the aligned scene image aligned with the current scene image can then be generated from the aligned image content of each object area, for example, by directly stitching the aligned image content of each object area and using the stitched image as the aligned scene image aligned with the current scene image.
In 250, synthesis noise reduction processing is performed on the current scene image and the aligned scene image to obtain the noise-reduced scene image of the current scene image.
As above, after the aligned scene image aligned with the current scene image is generated, the embodiments of the present application further perform synthesis noise reduction processing on the current scene image and the generated aligned scene image according to the configured multi-frame noise reduction strategy, thereby obtaining the noise-reduced scene image of the current scene image.
For example, referring to FIG. 2, assuming that one aligned scene image is generated, as shown in FIG. 2, noise exists in both the current scene image and the aligned scene image, whereas the noise-reduced scene image obtained through the synthesis noise reduction processing is free of noise.
It should be noted that the embodiments of the present application do not specifically limit the configuration of the multi-frame noise reduction strategy, which may be configured by those skilled in the art according to actual needs.
Exemplarily, the multi-frame noise reduction strategy may be configured as:
for a pixel at a given pixel position, directly calculating the average of the first pixel value of the pixel in the current scene image and the second pixel value of the pixel in the historical scene image, and generating the noise-reduced scene image from the average pixel values of all pixels.
Alternatively, the multi-frame noise reduction strategy may be configured as:
for a pixel at a given pixel position, calculating the average of the first pixel value of the pixel in the current scene image and the second pixel value of the pixel in the historical scene image; further assigning weights to the first pixel value and the second pixel value according to their respective differences from the average pixel value, and performing a weighted average to obtain the weighted average of the first pixel value and the second pixel value; and generating the noise-reduced scene image from the weighted averages of all pixels.
Referring to FIG. 5, the present application further provides an image processor 300. The image processor 300 includes a data interface unit 310 and a data processing unit 320, where:
the data interface unit 310 is configured to acquire the current scene image and the historical scene image of the shooting scene of the electronic device;
the data processing unit 320 is configured to acquire the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image; and
to transmit the current scene image, the historical scene image, and the relative motion vector to an application processor, so that the application processor generates, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image, and performs synthesis noise reduction processing on the current scene image and the aligned scene image to obtain the noise-reduced scene image of the current scene image.
It should be noted that the image processor 300 provided by the embodiments of the present application may be applied to an electronic device configured with a camera and an application processor, with the image processor 300 and the application processor cooperating to perform noise reduction processing on the images captured by the camera. For details, reference may be made to the relevant descriptions in the above embodiments, which will not be repeated here.
Optionally, in an embodiment, the data processing unit 320 is configured to:
determine, from the object areas of the shooting scene, a noise reduction object area that needs noise reduction processing; and
acquire the area relative motion vector of the noise reduction object area in the current scene image and the historical scene image, so that the application processor generates, according to the area relative motion vector and the historical scene image, aligned image content aligned with the first current image content of the noise reduction object area in the current scene image, and generates the aligned scene image according to the aligned image content and the second current image content of the non-noise-reduction object areas in the current scene image.
Optionally, in an embodiment, the data processing unit 320 is configured to:
determine a feature point used to characterize the noise reduction object area;
acquire the feature-point relative motion vector of the feature point in the current scene image and the historical scene image; and
use the feature-point relative motion vector as the area relative motion vector.
Optionally, in an embodiment, the data processing unit 320 is configured to:
acquire the current motion vector of the electronic device;
correct the feature-point relative motion vector according to the current motion vector to obtain a corrected feature-point relative motion vector; and
use the corrected feature-point relative motion vector as the area relative motion vector.
Optionally, in an embodiment, the data processing unit 320 is configured to:
acquire the user's degree of interest in each object area in the shooting scene; and
determine the noise reduction object area from the object areas of the shooting scene according to the degree of interest in each object area.
Optionally, in an embodiment, the data interface unit 310 is configured to:
acquire a state factor used to determine an image acquisition quantity;
calculate, according to the state factor and the quantity calculation strategy corresponding to the state factor, the image acquisition quantity corresponding to the state factor; and
acquire historical scene images of the shooting scene according to the image acquisition quantity.
Optionally, in an embodiment, the data processing unit 320 is configured to:
acquire the relative motion vector of the object area in the shooting scene in the current scene image and each historical scene image, so that the application processor generates, according to each historical scene image and its corresponding relative motion vector, an aligned scene image aligned with the current scene image to obtain the image-acquisition-quantity of aligned scene images, and performs synthesis noise reduction processing on the current scene image and the image-acquisition-quantity of aligned scene images to obtain the noise-reduced scene image.
Optionally, in an embodiment, the state factor includes at least one of the scene type of the shooting scene, the operating power consumption of the electronic device, the operating temperature of the electronic device, and the noise reduction quality of historical noise-reduced scene images of the shooting scene.
It should be noted that the image processor 300 provided by the embodiments of the present application and the image processing method in the above embodiments belong to the same concept; for its specific implementation process, reference may be made to the above related embodiments, which will not be repeated here.
Referring to FIG. 6 and FIG. 7 together, FIG. 6 is a schematic structural diagram of an electronic device 400 further provided by the present application, and FIG. 7 is a schematic diagram of an application scenario of the electronic device 400. As shown in FIG. 6, the electronic device 400 includes:
a camera 410, configured to capture scene images of a shooting scene;
an image processor 420, configured to acquire the current scene image and the historical scene image of the shooting scene captured by the camera, and to acquire the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image; and
an application processor 430, configured to generate, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image, and to perform synthesis noise reduction processing on the current scene image and the aligned scene image to obtain the noise-reduced scene image of the current scene image.
Optionally, in an embodiment, the image processor 420 is configured to:
determine, from the object areas of the shooting scene, a noise reduction object area that needs noise reduction processing; and
acquire the area relative motion vector of the noise reduction object area in the current scene image and the historical scene image.
Optionally, in an embodiment, the application processor 430 is configured to:
generate, according to the area relative motion vector and the historical scene image, aligned image content aligned with the first current image content of the noise reduction object area in the current scene image; and
generate the aligned scene image according to the aligned image content and the second current image content of the non-noise-reduction object areas in the current scene image.
Optionally, in an embodiment, the application processor 430 is configured to:
remap, according to the area relative motion vector, the historical image content of the noise reduction object area in the historical scene image to obtain the aligned image content.
Optionally, in an embodiment, the image processor 420 is configured to:
determine a feature point used to characterize the noise reduction object area;
acquire the feature-point relative motion vector of the feature point in the current scene image and the historical scene image; and
use the feature-point relative motion vector as the area relative motion vector.
Optionally, in an embodiment, the image processor 420 is configured to:
acquire the current motion vector of the electronic device 400;
correct the feature-point relative motion vector according to the current motion vector to obtain a corrected feature-point relative motion vector; and
use the corrected feature-point relative motion vector as the area relative motion vector.
Optionally, in an embodiment, the image processor 420 is configured to:
acquire the user's degree of interest in each object area in the shooting scene; and
determine the noise reduction object area from the object areas of the shooting scene according to the degree of interest in each object area.
Optionally, in an embodiment, the image processor 420 is configured to:
acquire a state factor used to determine an image acquisition quantity;
calculate, according to the state factor and the quantity calculation strategy corresponding to the state factor, the image acquisition quantity corresponding to the state factor; and
acquire historical scene images of the shooting scene according to the image acquisition quantity.
Optionally, in an embodiment, the image processor 420 is configured to:
acquire the relative motion vector of the object area in the shooting scene in the current scene image and each historical scene image.
Optionally, in an embodiment, the application processor 430 is configured to:
generate, according to each historical scene image and its corresponding relative motion vector, an aligned scene image aligned with the current scene image, to obtain the image-acquisition-quantity of aligned scene images; and
perform synthesis noise reduction processing on the current scene image and the image-acquisition-quantity of aligned scene images to obtain the noise-reduced scene image.
Optionally, in an embodiment, the state factor includes at least one of the scene type of the shooting scene, the operating power consumption of the electronic device, the operating temperature of the electronic device, and the noise reduction quality of historical noise-reduced scene images of the shooting scene.
It should be noted that the electronic device 400 provided by the embodiments of the present application and the image processing method in the above embodiments belong to the same concept; for its specific implementation process, reference may be made to the above related embodiments, which will not be repeated here.
Referring to FIG. 8, the present application further provides an electronic device 500. As shown in FIG. 8, the electronic device 500 may include a memory 510 and a processor 520. Those skilled in the art can understand that the structure of the electronic device 500 shown in FIG. 8 does not constitute a limitation on the electronic device 500, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The memory 510 may be used to store computer programs and data. The computer programs stored in the memory 510 contain executable code. The computer programs may be divided into various functional modules.
The processor 520 is the control center of the electronic device 500; it connects the various parts of the entire electronic device 500 by means of various interfaces and lines, and performs the various functions of the electronic device 500 and processes data by running or executing the computer programs stored in the memory 510 and invoking the data stored in the memory 510, thereby controlling the electronic device 500 as a whole.
In the embodiments of the present application, the processor 520 in the electronic device 500 loads the executable code corresponding to one or more computer programs into the memory 510, and the processor 520 executes the code to perform the following steps:
acquiring the current scene image and the historical scene image of the shooting scene of the electronic device 500;
acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image;
generating, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image; and
performing synthesis noise reduction processing on the current scene image and the aligned scene image to obtain the noise-reduced scene image of the current scene image.
Optionally, in an embodiment, when acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image, the processor 520 is configured to perform:
determining, from the object areas of the shooting scene, a noise reduction object area that needs noise reduction processing; and
acquiring the area relative motion vector of the noise reduction object area in the current scene image and the historical scene image.
Optionally, in an embodiment, when generating, according to the relative motion vector and the historical scene image, the aligned scene image aligned with the current scene image, the processor 520 is configured to perform:
generating, according to the area relative motion vector and the historical scene image, aligned image content aligned with the first current image content of the noise reduction object area in the current scene image; and
generating the aligned scene image according to the aligned image content and the second current image content of the non-noise-reduction object areas in the current scene image.
Optionally, in an embodiment, when generating, according to the area relative motion vector and the historical scene image, the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image, the processor 520 is configured to perform:
remapping, according to the area relative motion vector, the historical image content of the noise reduction object area in the historical scene image to obtain the aligned image content.
Optionally, in an embodiment, when acquiring the area relative motion vector of the noise reduction object area in the current scene image and the historical scene image, the processor 520 is configured to perform:
determining a feature point used to characterize the noise reduction object area;
acquiring the feature-point relative motion vector of the feature point in the current scene image and the historical scene image; and
using the feature-point relative motion vector as the area relative motion vector.
Optionally, in an embodiment, when using the feature-point relative motion vector as the area relative motion vector, the processor 520 is configured to perform:
acquiring the current motion vector of the electronic device;
correcting the feature-point relative motion vector according to the current motion vector to obtain a corrected feature-point relative motion vector; and
using the corrected feature-point relative motion vector as the area relative motion vector.
Optionally, in an embodiment, when determining, from the object areas of the shooting scene, the noise reduction object area that needs noise reduction processing, the processor 520 is configured to perform:
acquiring the user's degree of interest in each object area in the shooting scene; and
determining the noise reduction object area from the object areas of the shooting scene according to the degree of interest in each object area.
Optionally, in an embodiment, when acquiring the historical scene image of the shooting scene of the electronic device, the processor 520 is configured to perform:
acquiring a state factor used to determine an image acquisition quantity;
calculating, according to the state factor and the quantity calculation strategy corresponding to the state factor, the image acquisition quantity corresponding to the state factor; and
acquiring historical scene images of the shooting scene according to the image acquisition quantity.
Optionally, in an embodiment, when acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image, the processor 520 is configured to perform:
acquiring the relative motion vector of the object area in the shooting scene in the current scene image and each historical scene image.
Optionally, in an embodiment, when generating, according to the relative motion vector and the historical scene image, the aligned scene image aligned with the current scene image, the processor 520 is configured to perform:
generating, according to each historical scene image and its corresponding relative motion vector, an aligned scene image aligned with the current scene image, to obtain the image-acquisition-quantity of aligned scene images;
and when performing synthesis noise reduction processing on the current scene image and the aligned scene image to obtain the noise-reduced scene image of the current scene image, the processor 520 is configured to perform:
performing synthesis noise reduction processing on the current scene image and the image-acquisition-quantity of aligned scene images to obtain the noise-reduced scene image.
Optionally, in an embodiment, the state factor includes at least one of the scene type of the shooting scene, the operating power consumption of the electronic device, the operating temperature of the electronic device, and the noise reduction quality of historical noise-reduced scene images of the shooting scene.
It should be noted that the electronic device 500 provided by the embodiments of the present application and the image processing method in the above embodiments belong to the same concept; for its specific implementation process, reference may be made to the above related embodiments, which will not be repeated here.
An embodiment of the present application further provides a computer-readable storage medium in which a computer program is stored; the computer program can be loaded by a processor to perform the steps in any of the image processing methods provided by the embodiments of the present application. For example, the computer program may cause the processor to perform the following steps:
acquiring the current scene image and the historical scene image of the shooting scene of the electronic device;
acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image;
generating, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image; and
performing synthesis noise reduction processing on the current scene image and the aligned scene image to obtain the noise-reduced scene image of the current scene image.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the computer program stored in the computer-readable storage medium can perform the steps in any of the image processing methods provided by the embodiments of the present application, the beneficial effects achievable by any of those image processing methods can be realized; for details, see the foregoing embodiments, which will not be repeated here.
The image processing method, image processor, electronic device, and storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the present application. Meanwhile, for those skilled in the art, there will be changes in the specific implementations and application scope based on the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (20)

  1. An image processing method, comprising:
    acquiring a current scene image and a historical scene image of a shooting scene of an electronic device;
    acquiring a relative motion vector of an object area in the shooting scene in the current scene image and the historical scene image;
    generating, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image; and
    performing synthesis noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  2. The method according to claim 1, wherein the acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image comprises:
    determining, from object areas of the shooting scene, a noise reduction object area that needs noise reduction processing; and
    acquiring an area relative motion vector of the noise reduction object area in the current scene image and the historical scene image.
  3. The method according to claim 2, wherein the generating, according to the relative motion vector and the historical scene image, the aligned scene image aligned with the current scene image comprises:
    generating, according to the area relative motion vector and the historical scene image, aligned image content aligned with first current image content of the noise reduction object area in the current scene image; and
    generating the aligned scene image according to the aligned image content and second current image content of a non-noise-reduction object area in the current scene image.
  4. The method according to claim 3, wherein the generating, according to the area relative motion vector and the historical scene image, the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image comprises:
    remapping, according to the area relative motion vector, historical image content of the noise reduction object area in the historical scene image to obtain the aligned image content.
  5. The method according to claim 2, wherein the acquiring the area relative motion vector of the noise reduction object area in the current scene image and the historical scene image comprises:
    determining a feature point used to characterize the noise reduction object area;
    acquiring a feature-point relative motion vector of the feature point in the current scene image and the historical scene image; and
    using the feature-point relative motion vector as the area relative motion vector.
  6. The method according to claim 5, wherein the using the feature-point relative motion vector as the area relative motion vector comprises:
    acquiring a current motion vector of the electronic device;
    correcting the feature-point relative motion vector according to the current motion vector to obtain a corrected feature-point relative motion vector; and
    using the corrected feature-point relative motion vector as the area relative motion vector.
  7. The method according to claim 2, wherein the determining, from the object areas of the shooting scene, the noise reduction object area that needs noise reduction processing comprises:
    acquiring a user's degree of interest in each object area in the shooting scene; and
    determining the noise reduction object area from the object areas of the shooting scene according to the degree of interest in each object area.
  8. The method according to any one of claims 1-7, wherein the acquiring the historical scene image of the shooting scene of the electronic device comprises:
    acquiring a state factor used to determine an image acquisition quantity;
    calculating, according to the state factor and a quantity calculation strategy corresponding to the state factor, the image acquisition quantity corresponding to the state factor; and
    acquiring historical scene images of the shooting scene according to the image acquisition quantity.
  9. The method according to claim 8, wherein the acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image comprises:
    acquiring the relative motion vector of the object area in the shooting scene in the current scene image and each historical scene image.
  10. The method according to claim 9, wherein the generating, according to the relative motion vector and the historical scene image, the aligned scene image aligned with the current scene image comprises:
    generating, according to each historical scene image and its corresponding relative motion vector, an aligned scene image aligned with the current scene image, to obtain the image-acquisition-quantity of aligned scene images;
    and the performing synthesis noise reduction processing on the current scene image and the aligned scene image to obtain the noise-reduced scene image of the current scene image comprises:
    performing synthesis noise reduction processing on the current scene image and the image-acquisition-quantity of aligned scene images to obtain the noise-reduced scene image.
  11. The method according to claim 8, wherein the state factor comprises at least one of a scene type of the shooting scene, an operating power consumption of the electronic device, an operating temperature of the electronic device, and a noise reduction quality of a historical noise-reduced scene image of the shooting scene.
  12. An image processor, comprising:
    a data interface unit, configured to acquire a current scene image and a historical scene image of a shooting scene of an electronic device; and
    a data processing unit, configured to acquire a relative motion vector of an object area in the shooting scene in the current scene image and the historical scene image; and
    to transmit the current scene image, the historical scene image, and the relative motion vector to an application processor, so that the application processor generates, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image, and performs synthesis noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  13. An electronic device, comprising:
    a camera, configured to capture scene images of a shooting scene;
    an image processor, configured to acquire a current scene image and a historical scene image of the shooting scene captured by the camera, and to acquire a relative motion vector of an object area in the shooting scene in the current scene image and the historical scene image; and
    an application processor, configured to generate, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image, and to perform synthesis noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  14. An electronic device, comprising a processor and a memory, the memory storing a computer program, wherein the processor, by invoking the computer program stored in the memory, is configured to perform:
    acquiring a current scene image and a historical scene image of a shooting scene of the electronic device;
    acquiring a relative motion vector of an object area in the shooting scene in the current scene image and the historical scene image;
    generating, according to the relative motion vector and the historical scene image, an aligned scene image aligned with the current scene image; and
    performing synthesis noise reduction processing on the current scene image and the aligned scene image to obtain a noise-reduced scene image of the current scene image.
  15. The electronic device according to claim 14, wherein, when acquiring the relative motion vector of the object area in the shooting scene in the current scene image and the historical scene image, the processor is configured to perform:
    determining, from object areas of the shooting scene, a noise reduction object area that needs noise reduction processing; and
    acquiring an area relative motion vector of the noise reduction object area in the current scene image and the historical scene image.
  16. The electronic device according to claim 15, wherein, when generating, according to the relative motion vector and the historical scene image, the aligned scene image aligned with the current scene image, the processor is configured to perform:
    generating, according to the area relative motion vector and the historical scene image, aligned image content aligned with first current image content of the noise reduction object area in the current scene image; and
    generating the aligned scene image according to the aligned image content and second current image content of a non-noise-reduction object area in the current scene image.
  17. The electronic device according to claim 16, wherein, when generating, according to the area relative motion vector and the historical scene image, the aligned image content aligned with the first current image content of the noise reduction object area in the current scene image, the processor is configured to perform:
    remapping, according to the area relative motion vector, historical image content of the noise reduction object area in the historical scene image to obtain the aligned image content.
  18. The electronic device according to claim 15, wherein, when acquiring the area relative motion vector of the noise reduction object area in the current scene image and the historical scene image, the processor is configured to perform:
    determining a feature point used to characterize the noise reduction object area;
    acquiring a feature-point relative motion vector of the feature point in the current scene image and the historical scene image; and
    using the feature-point relative motion vector as the area relative motion vector.
  19. The electronic device according to claim 18, wherein, when using the feature-point relative motion vector as the area relative motion vector, the processor is configured to perform:
    acquiring a current motion vector of the electronic device;
    correcting the feature-point relative motion vector according to the current motion vector to obtain a corrected feature-point relative motion vector; and
    using the corrected feature-point relative motion vector as the area relative motion vector.
  20. A storage medium having a computer program stored thereon, wherein, when the computer program is loaded by a processor, the image processing method according to any one of claims 1-11 is performed.
PCT/CN2022/081493 2021-04-29 2022-03-17 Image processing method, image processor, electronic device and storage medium WO2022227916A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110476533.7 2021-04-29
CN202110476533.7A CN115272088A (zh) Image processing method, image processor, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2022227916A1 true WO2022227916A1 (zh) 2022-11-03

Family

ID=83744968

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/081493 WO2022227916A1 (zh) 2021-04-29 2022-03-17 图像处理方法、图像处理器、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN115272088A (zh)
WO (1) WO2022227916A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101815227A (zh) * Image processing apparatus and method
CN102611826A (zh) * Image processing device, image processing method, and program
US20160100103A1 (en) * Image processing device that synthesizes a plurality of images, method of controlling the same, storage medium, and image pickup apparatus
CN111369469A (zh) * Image processing method and apparatus, and electronic device
CN111915505A (zh) * Image processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN115272088A (zh) 2022-11-01

Similar Documents

Publication Publication Date Title
CN111402135B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN108322646B (zh) Image processing method and apparatus, storage medium, and electronic device
CN115442515B (zh) Image processing method and device
CN110149482B (zh) Focusing method and apparatus, electronic device, and computer-readable storage medium
CN109348089B (zh) Night scene image processing method and apparatus, electronic device, and storage medium
WO2019105154A1 (en) Image processing method, apparatus and device
US9615039B2 (en) Systems and methods for reducing noise in video streams
CN109218628A (zh) Image processing method and apparatus, electronic device, and storage medium
CN113286194A (zh) Video processing method and apparatus, electronic device, and readable storage medium
US10600189B1 (en) Optical flow techniques for event cameras
CN111614867B (zh) Video denoising method and apparatus, mobile terminal, and storage medium
CN110991287A (zh) Real-time video stream face detection and tracking method and detection and tracking system
JP7334432B2 (ja) Object tracking device, monitoring system, and object tracking method
CN111951192A (zh) Method for processing a captured image and photographing device
CN113313626A (zh) Image processing method and apparatus, electronic device, and storage medium
CN117408890A (zh) Video image transmission quality enhancement method and system
CN110740266A (zh) Image frame selection method and apparatus, storage medium, and electronic device
WO2022227916A1 (zh) Image processing method, image processor, electronic device and storage medium
JP2004157778A (ja) Nose position extraction method, program for causing a computer to execute the method, and nose position extraction device
CN115037869A (zh) Autofocus method and apparatus, electronic device, and computer-readable storage medium
KR20230064959A (ko) Surveillance camera WDR image processing through AI-based object recognition
CN114302226A (zh) Intelligent video frame cropping method
JP7298709B2 (ja) Parameter determination device, parameter determination method, and recording medium
CN116095487A (zh) Image stabilization method and apparatus, electronic device, and computer-readable storage medium
KR20220107683A (ko) Electronic device and control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794393

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22794393

Country of ref document: EP

Kind code of ref document: A1