CN111614905A - Image processing method, image processing device and electronic equipment

Image processing method, image processing device and electronic equipment

Info

Publication number
CN111614905A
Authority
CN
China
Prior art keywords
image
target
images
frames
moving object
Prior art date
Legal status
Pending
Application number
CN202010474632.7A
Other languages
Chinese (zh)
Inventor
王家伟
李睿涵
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010474632.7A
Publication of CN111614905A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The application discloses an image processing method, an image processing device and electronic equipment, belonging to the field of image processing and aiming to solve the problem of a poor imaging effect of a shot image. The method comprises the following steps: acquiring M frames of images, wherein each frame of image in the M frames of images comprises a first moving object; acquiring, for each frame of first image, target parameters of the first moving object in that first image relative to the first moving object in a second image, wherein the target parameters comprise a displacement and a rotation amount; and performing image fusion on the M frames of images according to the first moving object and the target parameters to obtain a target image. The first images are the images other than the second image among the M frames of images, and the second image is one frame among the M frames of images. The method and the device are applied to scenes in which motion-blurred images are shot.

Description

Image processing method, image processing device and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, and an electronic device.
Background
At present, the shooting functions of electronic devices are becoming increasingly powerful, and users can shoot images with special effects through an electronic device, for example, special motion-blurred images.
In general, after pressing the shutter, the user keeps the electronic device moving at roughly the same speed as the subject being photographed. The subject, which is then approximately still relative to the electronic device, is imaged sharply (presenting a real scene), while the surroundings are blurred (presenting a virtual scene), yielding a motion-blurred image.
However, in an actual shooting scene, it is difficult for the user not only to judge the moving speed of the object being shot, but also to keep the electronic device moving at the same speed as the moving object, which may result in a poor imaging effect of the shot image.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, which can solve the problem of poor imaging effect of a shot image.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including: acquiring M frames of images, wherein each frame of image in the M frames of images comprises a first moving object; acquiring, for each frame of first image, target parameters of the first moving object in that first image relative to the first moving object in a second image, wherein the target parameters comprise a displacement and a rotation amount; and performing image fusion on the M frames of images according to the first moving object and the target parameters to obtain a target image; wherein the first images are the images other than the second image among the M frames of images, the second image is one frame among the M frames of images, and M is an integer greater than 1.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including an acquisition module and an image fusion module. The acquisition module is configured to acquire M frames of images, wherein each frame of image in the M frames of images comprises a first moving object, and to acquire, for each frame of first image, target parameters of the first moving object in that first image relative to the first moving object in a second image, wherein the target parameters comprise a displacement and a rotation amount. The image fusion module is configured to perform image fusion on the M frames of images according to the first moving object and the target parameters acquired by the acquisition module to obtain a target image. The first images are the images other than the second image among the M frames of images, the second image is one frame among the M frames of images, and M is an integer greater than 1.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the present application, after acquiring M frames of images that each include a first moving object, the image processing apparatus may acquire, for each frame of first image, target parameters (i.e., a displacement and a rotation amount) of the first moving object in that first image relative to the first moving object in a second image (one frame among the M frames of images). The image processing apparatus may then perform image fusion on the M frames of images according to the first moving object and the target parameters to obtain a target image, the first images being the images other than the second image among the M frames of images. With this scheme, when a user wants to shoot a target image in which a moving first moving object presents a real scene, the user does not need to move the electronic device at all; the electronic device only needs to acquire M frames of images of the first moving object. Compared with the related art, in which the electronic device must be kept moving at the same speed as the first moving object during shooting, the shooting difficulty is reduced. Furthermore, after acquiring the M frames of images containing the first moving object, the image processing apparatus may take the second image as the reference and determine the displacement and rotation amount of the first moving object in each frame of first image relative to the first moving object in the second image, and then fuse the M frames of images according to the first moving object in each first image, the first moving object in the second image, and those displacements and rotation amounts. Because the fusion is guided by the per-frame displacement and rotation amount, the first moving object in every first image can be completely aligned and fused with the first moving object in the second image, so a sharp first moving object is retained in the target image and a motion-blurred image with the special effect is obtained. Compared with the related art, in which shooting fails and must be repeated whenever the motion speed of the electronic device cannot be kept consistent with that of the first moving object, the image processing apparatus of the present application obtains, directly through algorithmic processing, a target image in which the first moving object presents a real scene; the shooting steps are simplified and the imaging effect of the shot target image is improved.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic interface diagram applied to an image processing method according to an embodiment of the present disclosure;
fig. 3 is a second schematic interface diagram applied to an image processing method according to an embodiment of the present disclosure;
fig. 4 is a third schematic interface diagram applied to an image processing method according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 7 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used to distinguish between similar objects and not necessarily to describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in sequences other than those illustrated or described herein. The objects distinguished by "first", "second" and the like are usually of one class, and the number of such objects is not limited; for example, the first object may be one object or a plurality of objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The image processing method provided by the embodiment of the application can be applied to various shooting scenes, for example, shooting a moving automobile, a running person, a racing bicycle, or a hunting lion.
Taking the scene of shooting a moving automobile as an example, when a user wants a motion-blurred image in which the moving automobile presents a real scene and the other objects present a virtual scene, the user can hold the electronic device at a fixed position and collect multiple frames of images of the moving automobile. After acquiring the multi-frame images, the electronic device may select image A as a reference and calculate the displacement and rotation angle of the automobile in each other frame relative to the automobile in image A. The electronic device can then perform image fusion on the multiple frames according to these displacements and rotation angles, the automobile in each frame, and the automobile in image A, so that a sharp automobile pattern is retained in the fused image. Finally, the electronic device obtains a motion-blurred image in which the moving automobile presents a real scene and the other objects present a virtual scene. The shooting difficulty is thus reduced, and the imaging effect of the shot image is improved.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application, including steps 201 to 203:
step 201: the image processing apparatus acquires M frames of images.
Each frame of the M frames of images comprises a first moving object, and M is an integer greater than 1.
In this embodiment, the M frames of images are acquired for the first moving object and may be captured by a camera of the image processing apparatus. The camera may be built into the image processing device or externally connected to it, which is not limited in the embodiments of the present application.
For example, the M-frame image may be a color image, a grayscale image, or a binary image, which is not limited in this embodiment of the application.
In the embodiment of the present application, the first moving object may be any object in motion, for example a moving person, a moving animal, or a moving vehicle, which may be determined according to actual needs; this is not limited in the embodiments of the present application.
The number of the first moving objects may be one or a plurality of the first moving objects, and the embodiment of the present application is not limited thereto.
Step 202: the image processing device acquires, for each frame of first image, target parameters of the first moving object in that first image relative to the first moving object in the second image.
The first images are the images other than the second image among the M frames of images, the second image is one frame among the M frames of images, and the target parameters comprise a displacement and a rotation amount.
In this embodiment of the present application, the second image may be selected by default by the system, or may be selected by the user, which is not limited in this embodiment of the present application.
In one example, the image processing apparatus may take an image of the first moving object in a preset area as the second image. The preset area may be any area in the image. For example, the image processing apparatus may take an image in which the first moving object is in the middle area as the second image.
For example, the target parameters acquired by the image processing apparatus may be: the image processing device calculates the result.
In one example, the image processing apparatus may calculate the displacement of the first moving object in each frame of first image relative to the first moving object in the second image by using the Lucas-Kanade (LK) optical flow method, the median threshold bitmap (MTB) method, or the like. The displacement comprises a moving direction and a moving distance.
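For illustration only, the following sketch shows how such a per-frame displacement could be estimated with the LK optical flow method using OpenCV. The helper name estimate_displacement, the feature-tracking parameters, the optional object mask, and the use of the median flow vector are assumptions of this sketch, not part of the disclosure.

```python
import cv2
import numpy as np

def estimate_displacement(second_img, first_img, object_mask=None):
    """Displacement of the moving object in `first_img` relative to the
    reference `second_img`, via Lucas-Kanade sparse optical flow."""
    g_ref = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    g_cur = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    # Track corner features, optionally restricted to the object's region.
    pts = cv2.goodFeaturesToTrack(g_ref, maxCorners=200, qualityLevel=0.01,
                                  minDistance=7, mask=object_mask)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(g_ref, g_cur, pts, None)
    good = status.ravel() == 1
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    # The median flow vector is robust to outliers; it encodes both the
    # moving direction and the moving distance in pixels.
    return np.median(flow, axis=0)  # (dx, dy)
```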
In another example, the image processing apparatus may calculate the rotation amount of the first moving object in each frame of first image relative to the first moving object in the second image using acquired gyroscope data, for example by integrating and summing the gyroscope data. The rotation amount may include at least one of: a rotation angle, a rotation arc length, and a rotation angular velocity.
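A minimal sketch of the "integrate and sum" computation above, assuming the gyroscope delivers timestamped angular-velocity samples about the relevant axis (rad/s); rectangular integration and the function name rotation_between are assumptions of this sketch.

```python
def rotation_between(t_ref, t_cur, gyro_samples):
    """Rotation amount (radians) between the reference frame's capture
    time and the first image's capture time, by integrating gyroscope
    angular velocity. `gyro_samples` is a list of (timestamp, omega)."""
    t0, t1 = sorted((t_ref, t_cur))
    angle = 0.0
    for (ta, wa), (tb, _) in zip(gyro_samples, gyro_samples[1:]):
        if ta >= t0 and tb <= t1:
            angle += wa * (tb - ta)  # rectangular rule: omega * dt
    # Sign: rotation of the first image relative to the reference frame.
    return angle if t_cur >= t_ref else -angle
```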
For example, as shown in fig. 2, the image processing apparatus acquires 3 frames of images, namely image 1 (i.e., 31 in (a) in fig. 2), image 2 (i.e., 32 in (b) in fig. 2), and image 3 (i.e., 33 in (c) in fig. 2). If the image processing apparatus determines that the car (i.e., the first moving object) in image 2 is in the middle area of the image, it may determine image 2 as the second image. Then, taking image 2 as the reference, the image processing apparatus may calculate that the car in image 1 has moved distance 1 to the left relative to the car in image 2, and that the car in image 3 has moved distance 2 to the right relative to the car in image 2.
The image processing apparatus may record and store the acquired target parameter.
Step 203: and the image processing device performs image fusion on the M frames of images according to the first moving object and the target parameter to obtain a target image.
In the embodiment of the present application, the first moving object is included in the target image. It should be noted that the target image may include a complete and clear first moving object, that is, the first moving object in the target image represents a real scene.
It should be noted that, in the embodiment of the present application, the image fusion may be understood as: the pixel values of the images are superimposed, or subtracted, or covered (replaced), which may be specifically set according to actual requirements, and this is not limited in this embodiment of the application.
It should be noted that the image fusion in the embodiment of the present application includes, but is not limited to, the three fusion methods described above, and may be specifically set according to actual requirements.
Alternatively, in the embodiment of the present application, after the image processing apparatus obtains the target image, the target image may be displayed.
For example, in conjunction with fig. 2, after calculating that the car in image 1 has moved distance 1 to the left relative to the car in image 2, and that the car in image 3 has moved distance 2 to the right relative to the car in image 2, the image processing apparatus may perform image fusion on the 3 frames of images according to distance 1, distance 2, and the car. At this time, as shown in fig. 3, the image processing apparatus displays an image 41 in which the automobile presents a real scene while the triangular object and the elliptical object present a virtual scene.
It is understood that the triangular and elliptical objects presenting a virtual scene means that they show a ghosting or blurring effect.
In addition, when the image processing apparatus performs image fusion on the M frames of images based on the first moving object and the target parameters, objects whose motion speed differs from that of the first moving object, as well as stationary objects (for example, the above triangular object and elliptical object), cannot be completely aligned across the M frames and are therefore blurred.
Alternatively, the image processing apparatus may perform image fusion on the M-frame images through at least two possible implementations.
In a first possible implementation:
the image processing device can perform image fusion on every two frames of images in the M frames of images according to the first moving object and the target parameter to obtain M/2 frames of images, then perform image fusion on every two frames of images in the M/2 frames of images, and so on, perform image fusion on the last two frames of images.
If M is an odd number, a single frame image may be used as a frame image in the next image fusion.
For example, taking M equal to 3 as an example, the 3 frames of images are respectively image 1 to image 3, and the image processing apparatus may first perform image fusion on image 1 and image 2 according to the first moving object and the target parameter to obtain image a. Then, the image processing apparatus may perform image fusion again on the image a and the image 3 based on the first moving object and the target parameter.
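The following is a non-authoritative sketch of this first implementation; fuse_pair stands for any pixel-level pairwise fusion (for example, registration followed by superposition or averaging) and is an assumed helper, not an API defined by this disclosure.

```python
def fuse_hierarchically(frames, fuse_pair):
    """First implementation: fuse frames two by two; an odd leftover
    frame joins the next round, until a single fused image remains."""
    while len(frames) > 1:
        nxt = [fuse_pair(frames[i], frames[i + 1])
               for i in range(0, len(frames) - 1, 2)]
        if len(frames) % 2 == 1:  # odd frame carried into the next round
            nxt.append(frames[-1])
        frames = nxt
    return frames[0]
```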
In a second possible implementation:
for example, the image fusion of the M frames of images according to the first moving object and the target parameter in step 203 may specifically include the following steps 203 a:
step 203 a: and the image processing device adopts each frame of the first image to perform image fusion with the second image according to the first moving object and the target parameter.
It can be understood that performing image fusion of each frame of first image with the second image means: performing image fusion on each frame of first image and the second image with the second image as the reference. That is, the second image serves as the reference frame image.
For example, in conjunction with fig. 2, after calculating that the car in image 1 has moved distance 1 to the left relative to the car in image 2, and that the car in image 3 has moved distance 2 to the right relative to the car in image 2, the image processing apparatus may move image 1 to the right by distance 1 and fuse it with image 2, and move image 3 to the left by distance 2 and fuse it with image 2. In this way, the image processing apparatus fuses image 1 and image 3 with image 2 as the reference.
Alternatively, the image processing apparatus may perform image registration on the M frames of images according to the first moving object in the first image, the first moving object in the second image, and the target parameter in each frame of images. The image registration of the M frames of images is a precondition for image fusion of the M frames of images.
In one example, before performing image fusion on the M frames of images based on the first possible implementation manner, the image processing apparatus may perform image registration on each two frames of images before performing image fusion on each two frames of images in the M frames of images according to the first moving object and the target parameter.
In another example, before the image fusion is performed on the M-frame image in step 203, the method may further include the following step a 1:
step A1: and the image processing device carries out image registration on the first image and the second image of each frame according to the first moving object and the target parameter.
In one example, the image processing device may move or rotate each frame of first image according to the target parameters so as to register it with the second image.
In another example, the image processing apparatus may determine, according to the target parameters, the correspondence between the coordinates of each frame of first image and the coordinates of the second image, so that a coordinate correspondence between each frame of first image and the second image can be established.
It should be noted that, in an ideal situation, the first moving object in each frame of first image, after being moved and rotated according to the target parameters, registers completely with the first moving object in the second image.
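As a sketch of step A1 under stated assumptions (a per-frame displacement (dx, dy), a rotation angle in degrees, and a rotation center on the object), the displacement and rotation amount can be combined into a single affine warp; the function name register_to_reference and the sign conventions are assumptions of this sketch.

```python
import cv2

def register_to_reference(first_img, dx, dy, angle_deg, center):
    """Warp a first image so its moving object aligns with the second
    image: rotate by -angle about `center`, then translate by (-dx, -dy)
    to undo the measured motion."""
    h, w = first_img.shape[:2]
    M = cv2.getRotationMatrix2D(center, -angle_deg, 1.0)
    M[0, 2] -= dx  # cancel the measured horizontal displacement
    M[1, 2] -= dy  # cancel the measured vertical displacement
    return cv2.warpAffine(first_img, M, (w, h))
```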
In the image processing method provided by the embodiment of the application, after acquiring M frames of images that each include a first moving object, the image processing apparatus may acquire, for each frame of first image, target parameters (i.e., a displacement and a rotation amount) of the first moving object in that first image relative to the first moving object in a second image (one frame among the M frames of images). The image processing apparatus may then perform image fusion on the M frames of images according to the first moving object and the target parameters to obtain a target image, the first images being the images other than the second image among the M frames of images. With this scheme, when a user wants to shoot a target image in which a moving first moving object presents a real scene, the user does not need to move the electronic device at all; the electronic device only needs to acquire M frames of images of the first moving object. Compared with the related art, in which the electronic device must be kept moving at the same speed as the first moving object during shooting, the shooting difficulty is reduced. Furthermore, after acquiring the M frames of images containing the first moving object, the image processing apparatus may take the second image as the reference and determine the displacement and rotation amount of the first moving object in each frame of first image relative to the first moving object in the second image, and then fuse the M frames of images according to the first moving object in each first image, the first moving object in the second image, and those displacements and rotation amounts. Because the fusion is guided by the per-frame displacement and rotation amount, the first moving object in every first image can be completely aligned and fused with the first moving object in the second image, so a sharp first moving object is retained in the target image and a motion-blurred image with the special effect is obtained. Compared with the related art, in which shooting fails and must be repeated whenever the motion speed of the electronic device cannot be kept consistent with that of the first moving object, the image processing apparatus of the present application obtains, directly through algorithmic processing, a target image in which the first moving object presents a real scene; the shooting steps are simplified and the imaging effect of the shot target image is improved.
Optionally, in this embodiment of the application, to highlight the first moving object and achieve the optical flow covering effect, the image processing apparatus may further process the fused image.
Illustratively, step 203 may specifically include the following steps 203b and 203c:
Step 203b: the image processing device performs image fusion on the M frames of images to obtain a fifth image.
Step 203c: the image processing device blurs the regions of the fifth image other than the region where the first moving object is located to obtain the target image.
For example, the image processing apparatus may determine the area where the first moving object is located by acquiring coordinates of the first moving object in the fifth image.
For example, the image processing apparatus may apply a blurring algorithm to the regions of the fifth image other than the region where the first moving object is located; that is, it performs blurring processing on those regions.
The blurring algorithm may comprise at least one of: a Gaussian blur filtering algorithm, a mean blur filtering algorithm, and a Laplacian blur filtering algorithm, which can be chosen according to actual requirements; this is not limited in the present application.
For example, referring to fig. 3, after the image processing device performs image fusion on the 3 frames of images to obtain the image 41, in which the automobile presents a real scene while the triangular object and the elliptical object present a virtual scene, the image processing device may blur the regions of image 41 other than the region where the automobile is located through a Gaussian blur filtering algorithm.
The image processing method provided by the embodiment of the application can be applied to scenes in which the fused image is further processed. By blurring the regions of the fifth image other than the region where the first moving object is located, the image processing device highlights the first moving object and blurs the other regions, enhancing the imaging effect of the image.
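A minimal sketch of steps 203b and 203c, assuming a single-channel mask of the first moving object's region is available (how that mask is obtained, e.g. from the object's coordinates, is outside this sketch):

```python
import cv2
import numpy as np

def blur_background(fifth_img, object_mask, ksize=(21, 21)):
    """Step 203c: Gaussian-blur everything outside the moving object's
    region, then composite the sharp object back over the blurred frame."""
    blurred = cv2.GaussianBlur(fifth_img, ksize, 0)
    mask3 = cv2.merge([object_mask] * 3) > 0  # 3-channel boolean mask
    return np.where(mask3, fifth_img, blurred)
```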
Optionally, in this embodiment of the application, in the process of fusing each frame of first image with the second image, the image processing apparatus may cover a pixel point in one frame of image with a vividly colored pixel point from another frame of image, so as to improve the imaging effect of the image.
For example, before the image fusion is performed on the M-frame image in the step 203, the method may further include the following step B1:
step B1: and the image processing device carries out binarization processing on the M frames of images according to a preset threshold value to obtain the M frames of images after binarization processing.
The preset threshold may be a preset pixel value.
For example, the preset threshold may be set by default in the system, may be set by a user, or may be calculated by the image processing apparatus, which is not limited in the embodiment of the present application.
For example, the preset threshold may be a mean value of pixel values calculated by the image processing apparatus according to pixel values of M frames of images, or may be a median value of pixel values calculated by the image processing apparatus according to pixel values of M frames of images, which is not limited in this embodiment of the application.
For example, the image processing apparatus may compare the pixel value of the pixel point in each frame of image in the M frames of images with a preset threshold, set the pixel value of the pixel point whose pixel value is greater than the preset threshold as a first pixel value, and set the pixel value of the pixel point whose pixel value is less than the preset threshold as a second pixel value.
For example, the first pixel value and the second pixel value may be any different values.
For example, taking the preset threshold as the median of pixel values of M frames of images as an example, the image processing device may compare the pixel values of the pixels in each frame of image in the M frames of images with the median of pixel values, and may set the pixel value of the pixel having the pixel value greater than the preset threshold as 1, and set the pixel value of the pixel having the pixel value less than the preset threshold as 0; or, the pixel value of the pixel point whose pixel value is greater than the preset threshold may be set to 0, and the pixel value of the pixel point whose pixel value is less than the preset threshold may be set to 1.
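A sketch of step B1 under the assumption that the preset threshold is the median pixel value over all M frames (one of the options named above); grayscale frames are assumed for simplicity.

```python
import numpy as np

def binarize_frames(frames):
    """Step B1: binarize each frame against one shared preset threshold,
    here taken as the median pixel value over all M frames."""
    thresh = np.median(np.stack(frames))
    return [(f > thresh).astype(np.uint8) for f in frames], thresh
```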
Based on step B1, in the process of performing image fusion on the M frames of images in step 203, the image processing apparatus may compare the pixel value of each pixel point of any binarized first image with the pixel value of the pixel point at the corresponding position in the binarized second image; if the two are the same, the image processing apparatus performs step 203a1, and if they differ, it performs step 203a2, wherein:
step 203a 1: and if the pixel value of the target pixel point in the third image is the same as the pixel value of the target pixel point in the second image after the binarization processing, the image processing device determines the pixel value of the target pixel point in the second image before the binarization processing as the pixel value of the target pixel point in the target image.
Step 203a 2: and if the pixel value of the target pixel point in the third image is different from the pixel value of the target pixel point in the second image after the binarization processing, determining a fourth image, and determining the pixel value of the target pixel point of the fourth image before the binarization processing as the pixel value of the target pixel point in the target image.
The third image is any first image that has been registered with the second image and binarized; the fourth image is, of the third image and the binarized second image, the image whose target pixel point has the larger pixel value.
Illustratively, the target pixel point refers to a pixel point at the same position after the third image and the binarized second image are registered.
For example, take image 4 as the third image and image 5 as the binarized second image. If the image processing apparatus finds that pixel point A in image 4 has the binarized value 1 while the corresponding pixel point B in image 5 has the binarized value 0, the values differ and image 4 is the fourth image, so the pre-binarization value of pixel point B in image 5 (e.g., 10) is replaced by the pre-binarization value of pixel point A in image 4 (e.g., 50). If pixel point A in image 4 and pixel point B in image 5 both have the binarized value 1, or both have the binarized value 0, the image processing apparatus retains the pre-binarization value of pixel point B in image 5. By analogy, all pixel points of image 4 and image 5 are compared, and finally image 4 and image 5 can be fused.
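The comparison logic of steps 203a1 and 203a2 can be sketched as follows (numpy-based; the helper name fuse_binarized is an assumption). Where the binarized values agree, the reference pixel is kept; where they differ, the pixel is taken from the image whose binarized value is larger, i.e. the more vivid pixel.

```python
import numpy as np

def fuse_binarized(orig_second, orig_first, bin_second, bin_first):
    """Steps 203a1/203a2: per-pixel selection between a registered,
    binarized first image and the binarized second image."""
    take_first = bin_first > bin_second  # values differ, first image wins
    if orig_second.ndim == 3:            # color: broadcast mask per channel
        take_first = take_first[..., None]
    # Equal binarized values fall through to the second image's pixel.
    return np.where(take_first, orig_first, orig_second)
```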
The image processing method provided by the embodiment of the application can be applied to the scene of the fused image, the image processing device can rapidly determine the pixel points with bright colors by carrying out binarization processing on the M frames of images, and rapidly realize the fusion of the M frames of images according to the M frames of images after the binarization processing, so that the imaging effect of the images is improved.
Optionally, in the embodiment of the present application, simple local pixel coverage may not produce smooth color lines; therefore, to ensure the aesthetics of the image, the motion trajectories of some objects may be color-filled.
For example, in the case that at least two frames of images among the M frames of images contain the second object, after the step 201 described above, the method may further include the following steps C1 to C3:
step C1: the image processing apparatus acquires target position information.
The target position information is the position information of the second object of each frame of image in at least two frames of images.
For example, the second object may be a moving object or a stationary object, which is not limited in this application. One or more second objects may be provided, which is not limited in this embodiment of the present application.
It should be noted that whether an object is moving or stationary is determined relative to the electronic device; that is, the second object may be an object moving relative to the electronic device or an object stationary relative to the electronic device.
In the case that the second object is a moving object, the second object may have a moving speed the same as that of the first moving object, or may have a moving speed different from that of the first moving object, which is not limited in this embodiment of the present application.
For example, the target position information may be coordinates of the second object, and may also be a displacement and a rotation amount of the second object relative to the first moving object, which is not limited in this embodiment of the present application.
Step C2: the image processing device determines the track of the second object in the target image according to the target position information.
For example, the image processing apparatus may determine the trajectory of the second object by determining the trajectory of pixel points of an area in which the second object is located in each frame of image.
Step C3: the image processing apparatus determines the target pixel value as a pixel value on a trajectory of the second object in the target image.
The target pixel value is a pixel value of a second object in a second image, and the second image is an image in the at least two frames of images.
The target pixel value may be a pixel value of the second object in the second image, or may be a pixel value of the second object in any one of the M frame images other than the second image, which is not limited in the embodiment of the present application.
Note that the image processing apparatus may perform steps C1 to C3 before performing image fusion on the M-frame images in step 203; step C1 to step C3 may also be executed in the process of performing image fusion on the M frames of images in step 203, which is not limited in the embodiment of the present application.
For example, referring to fig. 2, suppose the coordinates of point a of the triangular object (i.e., the second object) are (10, 10) in image 1, (15, 10) in image 2 (i.e., the second image), and (20, 10) in image 3, and the image processing apparatus determines from the gyroscope data that the triangular object is displaced only along the x axis. The image processing apparatus can then obtain that the triangular object in image 1 is displaced by 5 to the left along the x axis relative to the triangular object in image 2, and that the triangular object in image 3 is displaced by 5 to the right. If the car in image 1 (i.e., the first moving object) is displaced by 2 to the left along the x axis relative to the car in image 2, and the car in image 3 is displaced by 2 to the right, then during the fusion of image 1 and image 3 with image 2 as the reference (registered on the car), the image processing apparatus can determine that the coordinates (10, 10) of point a of the triangular object in image 1 correspond to the coordinates (12, 10) in image 2, and that the coordinates (20, 10) of point a of the triangular object in image 3 correspond to the coordinates (18, 10) in image 2. The image processing apparatus may then determine the trajectory of point a of the triangular object in the image obtained by fusing image 1, image 3, and image 2 as running from coordinates (12, 10) to coordinates (18, 10), and determine the pixel value of point a of the triangular object in image 2 as the pixel value of the entire straight line segment from (12, 10) to (18, 10). Finally, as shown in fig. 4, an image 51 containing a triangular object with a smooth line can be obtained.
Here, the displacement of the triangular object in image 1 relative to the triangular object in image 2, and that of the triangular object in image 3 relative to the triangular object in image 2, can also be calculated by the LK optical flow method or the MTB method.
It should be noted that, if there are a plurality of second objects, calculating the displacement and rotation amount of each second object in each frame of first image of the at least two frames of images relative to that second object in the second image increases the computational load of the image processing apparatus. Therefore, to reduce the amount of calculation and the computational load, the image processing apparatus may determine the trajectory of the second object in the target image according to the target parameters of the first moving object.
For example, if the coordinates of the point a of the triangular object in the image 2 (i.e., the second image) are (15, 10), and if the image processing apparatus determines from the gyroscope data that the triangular object is displaced only in the x-axis, and that the amount of displacement of the car in the image 1 (i.e., the first object) from the car in the image 2 to the left in the x-axis is 2, and the amount of displacement of the car in the image 3 from the car in the image 2 to the right in the x-axis is 2, the image processing apparatus may determine the trajectory of the point a of the triangular object in the image after the image 1 and the image 3 are fused with the image 2 as coordinates (13, 10) to (17, 10). Finally, the image processing apparatus may determine the pixel value corresponding to the point a of the triangular object in the image 2 as the pixel value corresponding to the entire straight line segment from the coordinates (13, 10) to the coordinates (17, 10).
In addition, the image processing apparatus may determine the trajectory of the second object in the target image according to the target position information, or may determine the trajectory of the second object in each frame of image in the at least two frames of images on each frame of image, and perform color overlay.
For example, referring to fig. 2, if the amount of displacement of the car in the image 1 to the left in the x-axis direction relative to the car in the image 2 is 6, and the amount of displacement of the car in the image 3 to the right in the x-axis direction relative to the car in the image 2 is 6, and when the start coordinate of the point a of the triangular object (i.e., the second object) in the image 1 is (10, 10) and the coordinate of the point a of the triangular object in the image 3 is (20, 10), the image processing apparatus determines that the triangular object is displaced only in the x-axis direction based on the gyro data. At this time, the image processing apparatus may determine the motion trajectory of the point a of the triangular object in the image 1 as moving from the coordinates (10, 10) to the coordinates (16, 10) according to the shift amount 6 of the automobile, and determine the motion trajectory of the point a of the triangular object in the image 3 as moving from the coordinates (20, 10) to the coordinates (16, 10). Then, the image processing apparatus may determine the pixel value of the point a of the triangular object in the image 1 as a pixel value corresponding to the entire straight line segment of coordinates (10, 10) to coordinates (16, 10), and the image processing apparatus may determine the pixel value of the point a of the triangular object in the image 3 as a pixel value corresponding to the entire straight line segment of coordinates (20, 10) to coordinates (16, 10). Finally, the image processing apparatus can fuse the image 1 and the image 3 with the image 2 as a reference to obtain an image of a triangular object having smooth lines.
To improve the accuracy with which the image processing apparatus determines the target image, the image processing apparatus may also calculate the displacement and rotation amount of the second object itself, determine the trajectory of the second object based on its actual displacement, and perform pixel-value coverage. For example, if the start coordinate of point a of the triangular object in image 1 is (10, 10), the coordinate of point a of the triangular object in image 2 is (14, 10), and the image processing apparatus calculates that the triangular object in image 1 is displaced by 4 to the left along the x axis relative to the triangular object in image 2, the motion trajectory of point a of the triangular object in image 1 is determined to be the movement from coordinates (10, 10) to coordinates (14, 10).
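A sketch of steps C2 and C3 as they appear in the worked examples above: once the trajectory endpoints of a point of the second object have been mapped into the reference frame, its pixel value from the second image is painted along the whole segment (OpenCV-based; the helper name fill_trajectory is an assumption).

```python
import cv2

def fill_trajectory(target_img, start_xy, end_xy, pixel_value_bgr):
    """Step C3: paint the second object's point trajectory with its
    pixel value from the second image, producing one smooth color line
    instead of scattered local pixel coverage."""
    color = tuple(int(c) for c in pixel_value_bgr)
    cv2.line(target_img, start_xy, end_xy, color, thickness=1)
    return target_img

# e.g., point a of the triangular object, mapped endpoints (12, 10) and
# (18, 10), colored with the pixel of image 2 at (x=15, y=10):
# fill_trajectory(fused, (12, 10), (18, 10), image2[10, 15])
```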
The image processing method provided by the embodiment of the application can be applied to a scene with smooth lines of the second object, and the image processing device can carry out color coverage on the pixel values of the pixels on the track of the second object after the track of the second object, so that the lines of the second object are smooth, and the visual effect of the image is improved.
Optionally, in this embodiment of the application, when the image processing device captures multiple frames of images, it may screen out, from among them, M frames of images containing a complete first moving object and perform the subsequent image processing on those frames, so as to improve the aesthetics of the target image.
For example, the step 201 may specifically include the following steps 201a to 201 c:
step 201 a: an image processing apparatus receives a first input from a user.
Wherein the first input may be: the click input of the user for the camera shutter, or the voice instruction input by the user, or the specific gesture input by the user may be specifically determined according to the actual use requirement, which is not limited in the embodiment of the present application.
The specific gesture in the embodiment of the application can be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure identification gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application can be click input, double-click input, click input of any number of times and the like, and can also be long-time press input or short-time press input.
Step 201 b: in response to the first input, the image processing apparatus acquires N frames of images for the first moving object.
All or part of the N frames of images contain the first moving object, an image containing the first moving object may contain all or only part of it, and N is an integer greater than or equal to M.
It should be noted that, in the process of acquiring the N frames of images, the image processing apparatus needs to strictly control the exposure settings to ensure that the imaging effects of the frames are similar. For example, the image processing apparatus may adjust the exposure setting each time one frame of image is acquired.
It will be appreciated that, ideally, the first moving object moves from one side of the shooting interface of the electronic device exactly to the other side during the recording time in which the N frames of images are captured.
The recording time for acquiring the N frames of images may be default for the system and may be set by the user, which is not limited in the embodiment of the present application.
Step 201 c: the image processing apparatus determines an M-frame image from the N-frame images based on the first moving object.
For example, within the recording time for acquiring the N frames of images, it cannot be guaranteed that the first moving object moves from one side of the shooting interface of the image processing device to the other. Therefore, after acquiring the N frames of images, the image processing device can screen out the images in which the first moving object is relatively complete according to feature-point screening, thereby determining the M frames of images, for example at least 4 frames of images.
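For illustration, the feature-point screening could be sketched as follows; the use of ORB features, the template of the moving object, and the match threshold are all assumptions of this sketch rather than the patent's prescribed screening rule.

```python
import cv2

def select_complete_frames(frames, object_template, min_matches=30):
    """Keep frames whose feature matches against a template of the first
    moving object exceed a threshold, as a proxy for the object being
    relatively complete in the frame."""
    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _kp_t, des_t = orb.detectAndCompute(object_template, None)
    kept = []
    for f in frames:
        _kp, des = orb.detectAndCompute(f, None)
        if des is not None and len(bf.match(des_t, des)) >= min_matches:
            kept.append(f)
    return kept
```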
Further, before step 201a, the user may touch a target control to place the image processing apparatus in a target shooting mode, where the target shooting mode is a mode in which the moving object is preferentially focused.
The target control may be an existing key or a newly added key, which is not limited in the embodiments of the present application.
In one example, the image processing apparatus may focus the moving object through an auto focus detection algorithm.
In another example, the user may manually focus the moving object on the image processing apparatus.
For example, as shown in fig. 2, when a user needs to capture an image in which a moving object presents a real scene, the user may tap a "professional shooting" button; at this time, the shooting interface of the image processing apparatus is similar to a video shooting interface, and the user can aim the camera at the automobile to be shot. The user can then judge where the automobile will enter the shooting interface and focus manually; after the automobile enters the shooting interface, the image processing device can track-focus the automobile using an automatic focus-tracking detection technique and collect 6 frames of images. The image processing device can then screen according to the feature points and finally select 4 frames of images with a complete automobile pattern.
It should be noted that, when a plurality of moving objects are included in the shooting interface, a moving object that is manually focused by the user is taken as the first moving object.
The image processing method provided by the embodiment of the application can be applied to the scene of image screening, the image processing device can screen out the M frames of images with the complete first moving object pattern after acquiring the N frames of images, then the image processing device can utilize the M frames of images to perform the next image processing, and further the imaging effect of the target image can be improved.
Optionally, in this embodiment of the present application, after step 201c, the method may further include: the image processing device may extract the first moving object in each frame of the M frames of images according to a feature-point extraction method, and judge, by comparison against a reference object, whether the first moving object shows obvious motion. If so, the image processing device continues to execute steps 201 to 203; if not, the image processing device may output an image after performing simple image registration on the M frames of images, or select the image with the best quality among the M frames of images as the output image.
The first moving object showing no obvious motion means that the image processing device detects that the first moving object does not move, or detects only a motion deviation produced by shake of the image processing device itself.
In this way, the image processing device performs the subsequent image processing only on a first moving object that is actually moving, which improves the flexibility of the image processing device.
It should be noted that, in the image processing method provided in the embodiments of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the image processing apparatus provided herein.
Fig. 5 is a schematic diagram of a possible structure of an image processing apparatus for implementing the embodiments of the present application. As shown in fig. 5, the image processing apparatus 600 includes an acquisition module 601 and an image fusion module 602. The acquisition module 601 is configured to acquire M frames of images, wherein each frame of image in the M frames of images comprises a first moving object, and to acquire, for each frame of first image, target parameters of the first moving object in that first image relative to the first moving object in a second image, wherein the target parameters comprise a displacement and a rotation amount. The image fusion module 602 is configured to perform image fusion on the M frames of images according to the first moving object and the target parameters acquired by the acquisition module 601 to obtain a target image. The first images are the images other than the second image among the M frames of images, the second image is one frame among the M frames of images, and M is an integer greater than 1.
Optionally, as shown in fig. 5, the image processing apparatus 600 further includes an image registration module 603, configured to perform image registration on each frame of first image and the second image according to the first moving object and the target parameters acquired by the acquisition module 601.
Optionally, as shown in fig. 5, the image processing apparatus 600 further includes: an image binarization module 604; an image binarization module 604, configured to perform binarization processing on the M-frame image according to a preset threshold value to obtain an M-frame image after the binarization processing; the image fusion module 602 is specifically configured to: if the pixel value of the target pixel point in the third image is the same as the pixel value of the target pixel point in the second image after the binarization processing, determining the pixel value of the target pixel point in the second image before the binarization processing as the pixel value of the target pixel point in the target image; or if the pixel value of the target pixel point in the third image is different from the pixel value of the target pixel point in the second image after the binarization processing, determining the fourth image, and determining the pixel value of the target pixel point of the fourth image before the binarization processing as the pixel value of the target pixel point in the target image; the third image is any one first image which is registered with the second image and subjected to binarization processing; and the fourth image is an image with a larger pixel value of the target pixel point in the third image and the binarized second image.
Optionally, as shown in fig. 5, the image processing apparatus 600 further includes a determining module 605. The acquisition module 601 is further configured to acquire target position information when at least two frames of images in the M frames of images include a second object, where the target position information is the position information of the second object in each frame of the at least two frames of images. The determining module 605 is configured to determine a trajectory of the second object in the target image according to the target position information acquired by the acquisition module 601, and to determine the pixel values on the trajectory of the second object in the target image as a target pixel value. The target pixel value is the pixel value of the second object in the second image, and the second image is one of the at least two frames of images.
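As an illustration of the trajectory step, the hypothetical helper below connects the recorded positions with line segments and paints them with the target pixel value; the description does not state how intermediate trajectory pixels are obtained, so the linear interpolation between positions is an assumption:

```python
import cv2
import numpy as np

def paint_trajectory(target_img, positions, target_pixel_value):
    """positions: (x, y) of the second object in each of the at least two
    frames; target_pixel_value: the second object's pixel value taken from
    the second image. Pixels along the trajectory are set to that value."""
    pts = np.int32(positions).reshape(-1, 1, 2)
    color = tuple(int(c) for c in np.atleast_1d(target_pixel_value))
    cv2.polylines(target_img, [pts], isClosed=False, color=color, thickness=2)
    return target_img
```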
Optionally, as shown in fig. 5, the image processing apparatus 600 further includes a blurring module 606. The image fusion module 602 is specifically configured to perform image fusion on the M frames of images to obtain a fifth image, and the blurring module 606 is configured to blur the regions of the fifth image obtained by the image fusion module 602 other than the region where the first moving object is located, to obtain the target image.
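A short sketch of this blurring step, assuming a binary mask of the region where the first moving object is located is available (the description does not state how that mask is derived, so its source is an assumption):

```python
import cv2
import numpy as np

def blur_background(fifth_img, object_mask, ksize=(21, 21)):
    """Gaussian-blur everything in fifth_img outside the first moving
    object's region. object_mask: uint8, 255 inside the region, 0 outside."""
    blurred = cv2.GaussianBlur(fifth_img, ksize, 0)
    if fifth_img.ndim == 3:  # expand mask to 3 channels for color images
        object_mask = cv2.cvtColor(object_mask, cv2.COLOR_GRAY2BGR)
    return np.where(object_mask > 0, fifth_img, blurred)
```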
It should be noted that, as shown in fig. 5, modules that are necessarily included in the image processing apparatus 600 are indicated by solid-line boxes, such as the acquisition module 601; modules that may or may not be included in the image processing apparatus 600 are indicated by dashed-line boxes, such as the blurring module 606.
With the image processing apparatus according to the embodiments of the present application, after acquiring M frames of images each including a first moving object, the image processing apparatus may acquire the target parameters (i.e., the displacement and rotation amount) of the first moving object in each frame of first image relative to the first moving object in a second image (one frame of the M frames of images), and may then perform image fusion on the M frames of images according to the first moving object and the target parameters to obtain a target image, where the first image is an image other than the second image in the M frames of images. On the one hand, compared with the related art, which requires the user to move the electronic device at the same speed as the first moving object while shooting, this scheme obtains a target image presenting the real scene of the moving first moving object without moving the electronic device: only M frames of images containing the first moving object need to be captured, which reduces the shooting difficulty. On the other hand, because the image fusion uses the displacement and rotation amount of the first moving object in each frame of first image relative to the first moving object in the second image, the first moving object in each first image can be fused exactly onto the first moving object in the second image, so that a clear first moving object is retained in the target image and a motion-blurred image with a special effect is obtained. Compared with the related art, in which shooting fails and must be repeated whenever the motion speed of the electronic device does not match that of the first moving object, the image processing apparatus in the present application obtains the target image directly through algorithmic processing, which simplifies the shooting steps and improves the imaging effect of the captured target image.
The image processing apparatus in the embodiments of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), an automated teller machine, or a kiosk, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 6, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, and details are not described herein again.
The input unit 104 is configured to acquire M frames of images, where each frame of image in the M frames of images includes a first moving object. The processor 110 is configured to acquire target parameters of the first moving object in each frame of first image relative to the first moving object in a second image, where the target parameters include displacement and rotation amount, and to perform image fusion on the M frames of images according to the first moving object and the target parameters to obtain a target image. The first image is an image other than the second image in the M frames of images, the second image is one frame of image in the M frames of images, and M is an integer greater than 1.
Optionally, the processor 110 is configured to perform image registration on each frame of the first image with the second image according to the first moving object and the target parameter.
Optionally, the processor 110 is specifically configured to: perform binarization processing on the M frames of images according to a preset threshold value to obtain M frames of binarized images; if the pixel value of a target pixel point in a third image is the same as the pixel value of the target pixel point in the binarized second image, determine the pixel value of the target pixel point in the second image before binarization as the pixel value of the target pixel point in the target image; or, if the pixel value of the target pixel point in the third image is different from the pixel value of the target pixel point in the binarized second image, determine a fourth image, and determine the pixel value of the target pixel point in the fourth image before binarization as the pixel value of the target pixel point in the target image. The third image is any binarized first image that has been registered with the second image; the fourth image is whichever of the third image and the binarized second image has the larger pixel value at the target pixel point.
Optionally, the processor 110 is further configured to: in a case where at least two frames of images in the M frames of images include a second object, acquire target position information, where the target position information is the position information of the second object in each frame of the at least two frames of images; determine a trajectory of the second object in the target image according to the target position information; and determine the pixel values on the trajectory of the second object in the target image as a target pixel value. The target pixel value is the pixel value of the second object in the second image, and the second image is one of the at least two frames of images.
Optionally, the processor 110 is specifically configured to perform image fusion on the M frames of images to obtain a fifth image, and to blur the areas of the fifth image other than the area where the first moving object is located, to obtain the target image.
With the electronic device provided by the embodiments of the present application, after acquiring M frames of images each including a first moving object, the electronic device may acquire the target parameters (i.e., the displacement and rotation amount) of the first moving object in each frame of first image relative to the first moving object in a second image (one frame of the M frames of images), and may then perform image fusion on the M frames of images according to the first moving object and the target parameters to obtain a target image, where the first image is an image other than the second image in the M frames of images. On the one hand, compared with the related art, which requires the user to move the electronic device at the same speed as the first moving object while shooting, this scheme does not require the user to move the electronic device: only M frames of images containing the first moving object need to be captured, which reduces the shooting difficulty. On the other hand, because the image fusion uses the displacement and rotation amount of the first moving object in each frame of first image relative to the first moving object in the second image, the first moving object in each first image can be fused exactly onto the first moving object in the second image, so that a clear first moving object is retained in the target image and a motion-blurred image with a special effect is obtained. Compared with the related art, in which shooting fails and must be repeated whenever the motion speed of the electronic device does not match that of the first moving object, the electronic device in the present application obtains the target image of the first moving object presenting the real scene directly through algorithmic processing, which simplifies the shooting steps and improves the imaging effect of the captured target image.
It should be understood that, in the embodiments of the present application, the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processing unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes a touch panel 1071, also referred to as a touch screen, and other input devices 1072. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, and a modem processor, which primarily handles wireless communication. It should be appreciated that the modem processor may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order, depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image processing method, characterized in that the method comprises:
acquiring M frames of images, wherein each frame of image in the M frames of images comprises a first moving object;
acquiring target parameters of the first moving object in each frame of first image relative to the first moving object in a second image, wherein the target parameters comprise displacement and rotation amount;
performing image fusion on the M frames of images according to the first moving object and the target parameter to obtain a target image;
the first image is an image other than the second image in the M-frame images, the second image is one of the M-frame images, and M is an integer greater than 1.
2. The method of claim 1, wherein prior to image fusing the M-frame images, the method further comprises:
and carrying out image registration on each frame of the first image and the second image according to the first moving object and the target parameter.
3. The method of claim 2, wherein prior to image fusing the M-frame images, the method further comprises:
performing binarization processing on the M frames of images according to a preset threshold value to obtain M frames of images after binarization processing;
the image fusion of the M frame images comprises:
if the pixel value of the target pixel point in the third image is the same as the pixel value of the target pixel point in the second image after the binarization processing, determining the pixel value of the target pixel point in the second image before the binarization processing as the pixel value of the target pixel point in the target image;
or,
if the pixel value of the target pixel point in the third image is different from the pixel value of the target pixel point in the second image after binarization processing, determining a fourth image, and determining the pixel value of the target pixel point of the fourth image before binarization processing as the pixel value of the target pixel point in the target image;
wherein the third image is any one first image that has been registered with the second image and subjected to binarization processing; and the fourth image is whichever of the third image and the binarized second image has the larger pixel value at the target pixel point.
4. The method according to claim 1, wherein at least two of the M frames of images contain a second object; after the acquiring of the M frames of images, the method further comprises:
acquiring target position information, wherein the target position information is position information of the second object in each frame of the at least two frames of images;
determining a trajectory of the second object in the target image according to the target position information;
determining pixel values on the trajectory of the second object in the target image as a target pixel value;
wherein the target pixel value is a pixel value of the second object in the second image, and the second image is one of the at least two frames of images.
5. The method according to claim 1, wherein the image fusing the M-frame images to obtain a target image comprises:
carrying out image fusion on the M frames of images to obtain a fifth image;
blurring other areas except the area where the first moving object is located in the fifth image to obtain the target image.
6. An image processing apparatus, characterized by comprising: an acquisition module and an image fusion module;
the acquisition module is used for acquiring M frames of images, and each frame of image in the M frames of images comprises a first moving object; acquiring target parameters of the first moving object in each frame of first image relative to the first moving object in a second image, wherein the target parameters comprise displacement and rotation amount;
the image fusion module is used for carrying out image fusion on the M frames of images according to the first moving object and the target parameter acquired by the acquisition module to obtain a target image;
the first image is an image other than the second image in the M-frame images, the second image is one of the M-frame images, and M is an integer greater than 1.
7. The image processing apparatus according to claim 6, characterized by further comprising: an image registration module, wherein the image registration module is configured to perform image registration on each frame of the first image with the second image according to the first moving object and the target parameter acquired by the acquisition module.
8. The image processing apparatus according to claim 7, characterized by further comprising: an image binarization module; the image binarization module is used for carrying out binarization processing on the M frames of images according to a preset threshold value to obtain M frames of images after binarization processing;
the image fusion module is specifically configured to:
if the pixel value of the target pixel point in the third image is the same as the pixel value of the target pixel point in the second image after the binarization processing, determining the pixel value of the target pixel point in the second image before the binarization processing as the pixel value of the target pixel point in the target image;
or,
if the pixel value of the target pixel point in the third image is different from the pixel value of the target pixel point in the second image after binarization processing, determining a fourth image, and determining the pixel value of the target pixel point of the fourth image before binarization processing as the pixel value of the target pixel point in the target image;
wherein the third image is any one first image that has been registered with the second image and subjected to binarization processing; and the fourth image is whichever of the third image and the binarized second image has the larger pixel value at the target pixel point.
9. The image processing apparatus according to claim 6, characterized by further comprising: a determination module;
the acquisition module is further configured to acquire target position information in a case where at least two frames of images in the M frames of images include a second object, wherein the target position information is position information of the second object in each frame of the at least two frames of images;
the determining module is configured to determine a trajectory of the second object in the target image according to the target position information acquired by the acquisition module, and to determine pixel values on the trajectory of the second object in the target image as a target pixel value;
wherein the target pixel value is a pixel value of the second object in the second image, and the second image is one of the at least two frames of images.
10. The image processing apparatus according to claim 6, characterized by further comprising: a blurring module;
the image fusion module is specifically configured to perform image fusion on the M frames of images to obtain a fifth image;
the blurring module is configured to blur other regions of the fifth image obtained by the image fusion module except for the region where the first moving object is located, so as to obtain the target image.
11. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 5.
CN202010474632.7A 2020-05-29 2020-05-29 Image processing method, image processing device and electronic equipment Pending CN111614905A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200901