CN110324534B - Image processing method and device and electronic equipment - Google Patents
- Publication number
- CN110324534B CN110324534B CN201910621104.7A CN201910621104A CN110324534B CN 110324534 B CN110324534 B CN 110324534B CN 201910621104 A CN201910621104 A CN 201910621104A CN 110324534 B CN110324534 B CN 110324534B
- Authority
- CN
- China
- Prior art keywords
- offset
- image
- track
- processed
- pixel point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Abstract
The application discloses an image processing method and apparatus, and an electronic device. The method includes: acquiring an image to be processed and an offset trajectory; performing offset processing multiple times, along the offset trajectory, on the part of the image to be processed located in the area around the offset trajectory, to obtain multiple offset images; superimposing the multiple offset images to obtain a superimposed image; acquiring an image of a target object from the image to be processed; and displaying the image of the target object at the corresponding position in the superimposed image according to the offset trajectory. With this scheme, an image effect similar to rear-curtain shooting on a single-lens reflex camera can be achieved without relying on the hardware configuration of single-lens reflex equipment.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method and apparatus, and an electronic device.
Background
Single-lens reflex equipment offers a rear-curtain synchronized shooting mode. This mode uses a slow shutter to photograph a moving object at night, at dusk, or indoors, and fires the flash just before the rear curtain closes at the end of the exposure. The subject is thereby captured sharply together with its motion trail, producing the dreamlike effect of a clear subject against a smeared background.
In the prior art, the rear-curtain synchronization effect is achieved mainly through hardware: the flash parameters and firing time of the single-lens reflex device must be controlled. This approach is costly and technically demanding, so ordinary users find it difficult to capture well-executed rear-curtain synchronized images.
Disclosure of Invention
In order to overcome at least the above-mentioned deficiencies in the prior art, it is an object of the present application to provide an image processing method, comprising:
acquiring an image to be processed and an offset track;
performing offset processing multiple times, along the offset track, on the image in the area around the offset track in the image to be processed, to obtain a plurality of offset images;
superimposing the plurality of offset images to obtain a superimposed image;
acquiring an image of a target object from the image to be processed;
and displaying the image of the target object at the corresponding position in the superposed image according to the offset track.
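As a rough illustration only (not the claimed implementation), the five steps above can be sketched in NumPy. The per-pixel offset direction `shift_weights` is assumed here to be precomputed from the offset trajectory, which the patent derives via triangulation later in the description; all names are illustrative:

```python
import numpy as np

def rear_curtain_effect(image, offsets, shift_weights, target_mask):
    """Sketch of the pipeline: shift the image along the trajectory once per
    offset, average the shifted frames into a superimposed image, then show
    the sharp target object on top. `shift_weights` is an (H, W, 2) array of
    per-pixel (dx, dy) offset weights; `target_mask` marks the target object."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    shifted = []
    for off in offsets:
        # Offset every pixel by off * its offset weight, clamped to bounds.
        sx = np.clip((xs + off * shift_weights[..., 0]).astype(int), 0, w - 1)
        sy = np.clip((ys + off * shift_weights[..., 1]).astype(int), 0, h - 1)
        shifted.append(image[sy, sx])
    blurred = np.mean(shifted, axis=0)  # the superimposed image
    # Display the unshifted target object at its original position.
    return np.where(target_mask[..., None], image, blurred)
```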
Optionally, the step of performing offset processing multiple times along the offset track on the image in the area around the offset track, to obtain a plurality of offset images, includes:
for each of a plurality of different offsets, calculating the pixel value of each pixel point in the image of the area around the offset track under that offset, to obtain an offset image, wherein the offset is the total offset distance of the area around the offset track.
Optionally, the step of calculating, under the offset, the pixel value of each pixel point in the image of the area around the offset track after it is offset along the offset track, to obtain the offset image, includes:
calculating an offset weight corresponding to each pixel point in the image to be processed according to the offset track, wherein the offset weight is the offset proportion of the pixel point, and the closer a pixel point is to the offset track, the larger its offset weight;
calculating the product of the offset and the offset weight of each pixel point according to different offsets;
calculating offset pixel points corresponding to all the pixel points in the image to be processed according to the product of the offset and the offset weight values of all the pixel points;
and acquiring the pixel value of the offset pixel point corresponding to each pixel point to obtain an offset image.
Optionally, the step of calculating, under the offset, the pixel value of each pixel point in the image of the area around the offset track after it is offset along the offset track, to obtain the offset image, includes:
obtaining an offset weight for each pixel point within a preset distance range from the offset track, wherein the offset weight is the offset proportion of the pixel point, and the closer a pixel point is to the offset track, the larger its offset weight;
calculating, for each of the different offsets, the product of the offset and the offset weight of each pixel point;
Calculating offset pixel points corresponding to all the pixel points in the image to be processed according to the product of the offset and the offset weight values of all the pixel points;
and acquiring the pixel value of the offset pixel point corresponding to each pixel point to obtain an offset image.
Optionally, the step of acquiring the image of the target object includes:
acquiring a template image comprising the outline of a target object;
and acquiring the image of the target object from the image to be processed according to the template image.
Optionally, the step of superimposing the plurality of offset images to obtain a superimposed image includes:
averaging the pixel values of the pixel points located at the same row and column coordinates in each offset image, to obtain the superimposed image.
Optionally, the method further comprises:
acquiring a preset offset;
and calculating the product of the offset and a preset offset multiple to obtain a new offset.
Optionally, the step of displaying the image of the target object at the corresponding position in the superimposed image according to the offset trajectory includes:
acquiring a first position of the target object in the image to be processed;
determining a second position in the superimposed image corresponding to the first position;
displaying the image of the target object at the second position in the superimposed image such that the image of the target object is located at the end of the offset trajectory;
wherein the position of the target object in the image to be processed is consistent with the position of the tail end of the offset track.
Another object of the present application is to provide an image processing apparatus, comprising:
the first acquisition module is used for acquiring an image to be processed and an offset track;
the offset module is used for carrying out offset processing on the image of the area around the offset track on the image to be processed for multiple times along the offset track to obtain multiple offset images;
the superposition module is used for carrying out image superposition on the plurality of offset images to obtain superposed images;
the second acquisition module is used for acquiring an image of a target object from the image to be processed;
and the fusion module is used for displaying the image of the target object at the corresponding position in the superposed image according to the offset track.
Another object of the present application is to provide an electronic device, which includes an image capturing unit, a memory and a processor, wherein the image capturing unit is communicatively connected to the memory and the processor respectively, the memory is communicatively connected to the processor, the memory stores executable instructions, and the processor implements the method according to any one of the above when executing the executable instructions.
Compared with the prior art, the method has the following beneficial effects:
according to the image processing method and device and the electronic equipment, the image around the track on the image to be processed is subjected to multiple times of offset along the offset track, so that multiple offset images are obtained, the superposed image is obtained according to the multiple offset images, and finally the image of the target object in the image to be processed is displayed at the corresponding position in the superposed image, so that the image similar to the single-lens reflex rear-curtain synchronous shooting effect can be obtained. Since the method can be realized by an image processing method, no special configuration of hardware is required.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a block diagram schematically illustrating a structure of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a first flowchart illustrating an image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a second image processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a triangulation result provided in the embodiment of the present application;
fig. 5 is a third schematic flowchart of an image processing method provided in the embodiment of the present application;
FIG. 6 is an exemplary diagram of an image to be processed provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an offset trajectory provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a template image provided by an embodiment of the present application;
FIG. 9 is a graph of the effect of the offset provided by the embodiments of the present application;
fig. 10 is a structural diagram of an image processing apparatus according to an embodiment of the present application.
Icon: 100-an electronic device; 110-an image processing device; 111-a first acquisition module; 112-an offset module; 113-a superposition module; 114-a second acquisition module; 115-a fusion module; 120-a memory; 130-a processor; 140-image acquisition unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it is further noted that, unless expressly stated or limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application, where the electronic device 100 includes an image processing apparatus 110, a memory 120, and a processor 130, the memory 120 is communicatively connected to the processor 130 to implement data interaction, and the memory 120 stores executable instructions therein. The image processing apparatus 110 includes at least one software function module which may be stored in the memory 120 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the electronic device 100. The processor 130 is used for executing executable modules stored in the memory 120, such as software functional modules and computer programs included in the image processing apparatus 110.
Optionally, in this embodiment, the electronic device 100 further includes an image capturing unit 140, and the image capturing unit 140 is connected to the memory 120 and the processor 130 respectively.
Referring to fig. 2, fig. 2 is a first flowchart illustrating an image processing method according to an embodiment of the present invention, where the image processing method can be implemented by the electronic device 100, and the method includes steps S110 to S150. The steps S110 to S150 will be described in detail below.
Step S110, an image to be processed and an offset trajectory are acquired.
In this embodiment, the image to be processed is an image that is to be given the rear-curtain synchronization effect. It may be acquired by various image acquisition devices, for example, an image previously captured by a camera, mobile phone, or the like. Of course, the image to be processed may also be an image captured by the image acquisition device in real time.
In this embodiment, the offset trajectory is the trajectory along which the smear of the rear-curtain synchronization effect should extend in the image to be processed. The offset trajectory may be set by a user.
For example, the electronic device may obtain the offset trajectory set by the user through a touch screen or similar device. Specifically, an area of the touch screen is set as a canvas area, and the user draws the offset trajectory in the canvas area, yielding a first image that contains the offset trajectory. In this embodiment, the offset trajectory may alternatively be determined from a function input by the user, or the like.
And step S120, carrying out multiple times of offset processing on the image of the area around the offset track on the image to be processed along the offset track to obtain multiple offset images.
Specifically, for each of a plurality of different offsets, the pixel value of every pixel point in the image of the area around the offset track is calculated under that offset, that is, the pixel value after the pixel point is offset along the offset track, thereby obtaining an offset image. The offset is the total offset distance of the area around the offset track.
That is, for each amount of shift, the image of the area around the shift trajectory is shifted once along the shift trajectory using the amount of shift.
Referring to fig. 3, optionally, the step of calculating the pixel value of each pixel point in the image located in the area around the offset trajectory under the offset amount after the offset of each pixel point along the offset trajectory, and obtaining the offset image includes steps S1211 to S1214.
Step S1211, calculating an offset weight corresponding to each pixel point in the image to be processed according to the offset trajectory, where the offset weight is an offset ratio of the pixel point, and the offset weight of the pixel point closer to the offset trajectory is larger.
The method for calculating the offset weight of each pixel point on the image to be processed is explained in detail below by combining with the drawing of the offset track by the user.
Taking the case where the canvas area is the same size as the image to be processed as an example, the first image and the image to be processed are then the same size, and each pixel point on the canvas corresponds to the pixel point in the same row and column of the image to be processed. The positions of the pixel points that the user-drawn offset track passes through on the canvas therefore coincide with the positions of the pixel points the offset track passes through on the image to be processed. The offset track consists of a plurality of sub-tracks, each sub-track being a continuous portion of the offset track lying on one straight line.
After obtaining the first image, the first image is first triangulated according to each sub-trajectory in the first image.
In this embodiment, for convenience of processing, after obtaining the first image, the trajectory in the first image may be converted into a plurality of vectors, and then a new first image may be generated according to the converted vectors, where the size of the new first image may be a size of the old first image after being enlarged or reduced by a certain ratio. That is, the triangulation may be performed on the new first image after the scaling process.
Referring to fig. 4, the triangulation method is illustrated in detail for the case where the offset trajectory passes through two vertices of the first image. In the figure, the rectangular frame represents the edge of the first image, and the track inside the frame is the offset track, which passes through the upper-left and lower-right vertices of the frame. Each sub-track of the offset track is used in turn as one side of a triangle, and the lower-left vertex and the upper-right vertex of the frame are each used as a triangle vertex, so that a subdivision is obtained in which every smallest unit of the first image is a triangle. For example, if the sub-tracks of the offset track are T1, T2, T3, T4, T5, T6 and T7, the lower-left vertex is Q, and the upper-right vertex is O, then after triangulation each of T1 through T7 forms one triangle with Q and one triangle with O.
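The fan-style subdivision described above can be sketched as follows (function and variable names are illustrative):

```python
def fan_triangulate(subtracks, corner_a, corner_b):
    """Each sub-track segment (p0, p1) forms one triangle with each of the
    two frame corners not crossed by the trajectory, mirroring the
    subdivision of Fig. 4 (e.g. corner_a = Q, corner_b = O)."""
    tris = []
    for p0, p1 in subtracks:
        tris.append((p0, p1, corner_a))
        tris.append((p0, p1, corner_b))
    return tris
```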
In this embodiment, each of the sub-tracks corresponds to an initial offset weight, and the initial offset weight is an original offset degree of the sub-track.
After triangulation, for each pixel point, the quotient of a first distance, between the pixel point and its corresponding sub-track, and a second distance, between that sub-track and the vertex of the pixel point's triangle that lies away from the sub-track, is calculated to obtain a distance weight for the pixel point. The distance weight reflects the degree to which the offset affects the pixel point.
And the sub-track corresponding to the pixel point is the sub-track in the edge of the triangle where the pixel point is located.
And finally, calculating the offset weight according to the distance weight and the initial offset weight.
Referring to fig. 4, for example, pixel point P lies in the triangle formed by sub-track T4 and vertex O. The first distance between pixel point P and sub-track T4 is l1, and the second distance between vertex O and sub-track T4 is l2, so the distance weight W of pixel point P is l1/l2. Once the distance weight of a pixel point is obtained, its offset weight can be calculated from it.
The initial offset weight includes a first offset component and a second offset component, where the first offset component and the second offset component represent offset degrees of the pixel point in a preset first direction and a preset second direction, respectively, and the step of calculating the offset weight according to the distance weight and the initial offset weight includes: firstly, calculating the product of the first offset component and the offset weight to obtain a third offset component; then calculating the product of the second offset component and the offset weight to obtain a fourth offset component; and finally, obtaining the offset weight according to the third offset component and the fourth offset component, wherein the offset weight comprises a fifth offset component and a sixth offset component.
For example, an initial offset weight (X, Y) may be set for each sub-track, where X is the offset weight component of the sub-track along the horizontal axis (the first offset component) and Y is the component along the vertical axis (the second offset component). In this embodiment, the total of X and Y may be set to 1, that is, X + Y = 1. The initial offset weight is the original offset degree of the sub-track. Still taking pixel point P as an example, its offset weight can then be calculated from the triangulated first image as (X, Y) × W.
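A sketch of the distance-weight and offset-weight computation described above, with illustrative helper names; W = l1/l2 is taken literally from the example with pixel P:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment (a, b)."""
    p, a, b = np.asarray(p), np.asarray(a), np.asarray(b)
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - a - t * ab))

def offset_weight(p, subtrack, far_vertex, initial_xy):
    """W = l1 / l2: l1 is the pixel's distance to its sub-track, l2 the
    distance from the triangle vertex away from the sub-track; the pixel's
    offset weight is the sub-track's initial (X, Y) scaled by W."""
    l1 = point_segment_distance(p, *subtrack)
    l2 = point_segment_distance(far_vertex, *subtrack)
    w = l1 / l2
    return (initial_xy[0] * w, initial_xy[1] * w)
```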
When the size of the first image used for subdivision is consistent with that of the image to be processed, then the fifth offset component is equal to the third offset component, and the sixth offset component is equal to the fourth offset component.
When the size of the first image used for subdivision is not consistent with the size of the image to be processed, in this embodiment, before the step of obtaining the offset weight according to the third offset component and the fourth offset component, the third offset component and the fourth offset component of each pixel point in the first image may also be configured as a second image. In this way, when the offset weight is obtained according to the third offset component and the fourth offset component, the second image may be scaled first to obtain a third image with the same size as the to-be-processed image, where each pixel point of the third image includes the fifth offset component and the sixth offset component.
For example, when the third offset component and the fourth offset component constitute the second image, the third offset component and the fourth offset component corresponding to each pixel point may be set as R, G two channel values of the R, G, B channel, and the channel value of the other channel B may be set to 0. Therefore, the second image can be obtained, after the second image is obtained, the second image is processed by adopting the existing image scaling method, so that the third image can be obtained, and the channel value of the R, G, B channel in the third image is obtained by calculating the R, G, B channel of each pixel point in the second image. That is, the channel value of the R channel of each pixel point in the third image represents the fifth offset component, and the channel value of the G channel of each pixel point in the third image represents the sixth offset component.
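As a rough sketch of this channel-packing step (helper names are illustrative, and nearest-neighbour resizing is only a stand-in for "the existing image scaling method" the text refers to):

```python
import numpy as np

def weights_to_image(third, fourth):
    """Pack the third/fourth offset components into the R and G channels
    of an image, with the B channel set to 0."""
    h, w = third.shape
    img = np.zeros((h, w, 3), dtype=np.float32)
    img[..., 0] = third   # R channel: horizontal offset component
    img[..., 1] = fourth  # G channel: vertical offset component
    return img

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour scaling; after resizing, the R and G channels of
    the result hold the fifth and sixth offset components."""
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[ys][:, xs]
```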
Referring to fig. 3, in step S1212, a product of the offset and the offset weight of each pixel is calculated according to different offsets.
Optionally, before step S1212, the method further includes obtaining a plurality of different offsets.
Optionally, in this embodiment, the obtaining of the plurality of different offsets specifically includes obtaining a preset offset first. And then calculating the product of the offset and a preset offset multiple to obtain a new offset. After a new offset is obtained every time, the product of the offset and the preset offset multiple is repeatedly calculated, and a plurality of different offsets are obtained by repeating the calculation for a plurality of times. In this embodiment, the number of times of repeatedly performing the above steps may be preset.
For example, the preset offset is w, the preset offset multiple is 1.02, and the preset number of times of repeatedly calculating the offset is 3, then the obtained offsets are w, 1.02 × w, 1.02^2 × w, and 1.02^3 × w, respectively.
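This geometric sequence of offsets can be sketched as follows (function name is illustrative):

```python
def make_offsets(base, multiple, repeats):
    """Repeatedly multiply by the preset offset multiple: for base w,
    multiple 1.02 and 3 repeats this yields w, 1.02*w, 1.02^2*w, 1.02^3*w."""
    offsets = [base]
    for _ in range(repeats):
        offsets.append(offsets[-1] * multiple)
    return offsets
```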
Step S1213, calculating offset pixel points corresponding to each pixel point in the image to be processed.
Specifically, the offset pixel corresponding to each pixel in the image to be processed is calculated according to the product of the offset and the offset weight of each pixel.
The position of a pixel point in the first direction is its first position component, and its position in the second direction is its second position component. The step of calculating the offset pixel value from the preset offset and the offset weight of the pixel point includes: first, acquiring the fifth offset component and the sixth offset component of the pixel point; then calculating the product of the fifth offset component and the preset offset to obtain a first intermediate component, and calculating the sum of the first intermediate component and the first position component to obtain a first offset coordinate; calculating the product of the sixth offset component and the preset offset to obtain a second intermediate component, and calculating the sum of the second intermediate component and the second position component to obtain a second offset coordinate; and finally acquiring the pixel value at the position given by the first offset coordinate and the second offset coordinate, to obtain the offset pixel value of the pixel point.
For example, suppose the offset is w, the position of pixel point P is (x, y) with pixel value P(x, y), the fifth offset component is p_x, and the sixth offset component is p_y. Under this offset, the pixel value of pixel point (x, y) becomes:

P'(x, y) = P(x + w·p_x, y + w·p_y)
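A per-pixel sketch of this sampling rule; the boundary clamping is an added assumption, since the text does not state how out-of-range coordinates are handled:

```python
import numpy as np

def shifted_pixel_value(image, x, y, offset, px, py):
    """Sample the offset pixel for (x, y): the sampling coordinates are
    (x + offset*px, y + offset*py), clamped to the image bounds."""
    h, w = image.shape[:2]
    sx = int(np.clip(round(x + offset * px), 0, w - 1))
    sy = int(np.clip(round(y + offset * py), 0, h - 1))
    return image[sy, sx]
```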
step S1214, obtaining the pixel value of the offset pixel corresponding to each pixel, and obtaining an offset image.
Referring to fig. 5, optionally, the step of calculating the pixel value of each pixel point in the image located in the area around the offset trajectory under the offset amount after the pixel point is offset along the offset trajectory to obtain the offset image includes steps S1221 to S1224.
Step S1221, obtaining an offset weight of each pixel point within a preset distance range from the offset track, where the offset weight is an offset proportion of the pixel point, and the offset weight of the pixel point closer to the offset track is larger.
In this embodiment, the offset weight may be set manually.
Step S1222, according to the different offsets, calculating the product of the offset and the offset weight of each pixel.
Step S1223, calculating offset pixels corresponding to each pixel in the image to be processed.
Specifically, the offset pixel corresponding to each pixel in the image to be processed is calculated according to the product of the offset and the offset weight of each pixel.
Step S1224, obtain the pixel value of the offset pixel corresponding to each pixel, and obtain an offset image.
For the detailed process of step S1222 to step S1223, please refer to step S1212 to step S1213, which will not be described herein again.
Referring to fig. 2, in step S130, the offset images are superimposed to obtain a superimposed image.
Optionally, in step S130, the pixel values of the pixel points located at the same column and the same row in each offset image are averaged to obtain the superimposed image.
For example, let offset image A, offset image B, and offset image C be the offset images corresponding to different offsets, with pixel values A_ij, B_ij, and C_ij respectively, where i is the row index and j is the column index of a pixel point in the image. Then the pixel value D_ij of each pixel point of the superimposed image D, calculated from offset images A, B, and C, is the average of A_ij, B_ij, and C_ij, whereby the superimposed image D is obtained.
Thus, superimposing the n shifted images amounts to averaging them. Let P'(x, y) be the pixel value of the pixel point at position (x, y) in the final averaged result; the calculation formula is:

P'(x, y) = (1/n) · Σ_{k=1}^{n} P_k(x, y)

where P_k(x, y) is the value of the k-th shift result image at pixel (x, y), and n is the number of offset images making up the superimposed image.
Step S140, acquiring an image of the target object from the image to be processed.
In this embodiment, the method for obtaining the image of the target object from the image to be processed may adopt a target identification method, and identify the target object from the image to be processed through the trained target identification model, so as to obtain the image of the target object from the image to be processed according to the identified target object.
In this embodiment, the image of the target object may also be obtained in a template manner, specifically, a template image including a contour of the target object is first obtained, and then the image of the target object is obtained from the image to be processed according to the template image. The acquisition of the template image including the contour of the target object can be obtained by the following two ways:
In the first way, the template (mask) is drawn manually by the user.
In the second way, image segmentation is performed on the obtained shift result map; for example, when the target object is a person, a human body segmentation method may be adopted to obtain the template image.
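As a sketch of the template approach, the template image can be treated as a binary mask applied to the image to be processed. The mask representation and function name here are illustrative assumptions:

```python
import numpy as np

def extract_target(image, template_mask):
    """Return an image containing only the target-object pixels.

    `template_mask` is a boolean H x W array: True inside the target
    contour (drawn by the user or produced by body segmentation).
    Works for both grayscale (H x W) and color (H x W x C) images,
    since a 2-D boolean mask indexes the first two axes."""
    out = np.zeros_like(image)
    out[template_mask] = image[template_mask]
    return out
```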
Step S150, displaying the image of the target object at the corresponding position in the superimposed image according to the offset track.
In this embodiment, the superimposed image, the image to be processed, and the template image may be fused by image fusion. For example, if alpha fusion (alpha blending) is adopted, the final result image is:
P1(x,y)=P(x,y)*Alpha+P′(x,y)*(1-Alpha)
where P1(x, y) is the pixel value of the pixel point at (x, y) in the final image, P(x, y) is the pixel value of the pixel point at (x, y) in the image to be processed, and P'(x, y) is the pixel value of the pixel point at (x, y) in the superimposed image. Alpha is a transparency weight; for example, the transparency of a pixel point corresponding to the target object is 1, and the transparency of pixel points at other positions is 0.
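The alpha fusion formula above can be sketched as follows, with the per-pixel Alpha taken from the template image (1 on the target object, 0 elsewhere). Function and parameter names are assumptions:

```python
import numpy as np

def alpha_blend(to_process, superimposed, alpha):
    """P1(x, y) = P(x, y) * Alpha + P'(x, y) * (1 - Alpha).

    `alpha` is an H x W float array: 1.0 on target-object pixels,
    0.0 elsewhere (intermediate values would feather the boundary)."""
    if to_process.ndim == 3:
        # Broadcast the 2-D mask over the color channels.
        alpha = alpha[..., None]
    out = (to_process.astype(np.float64) * alpha
           + superimposed.astype(np.float64) * (1.0 - alpha))
    return out.round().astype(np.uint8)
```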
Taking the case where image segmentation is performed on the obtained shift result map as an example, optionally, in this embodiment, the step of displaying the image of the target object at a corresponding position in the superimposed image according to the offset trajectory includes: acquiring a first position of the target object in the image to be processed; determining a second position in the superimposed image corresponding to the first position; and displaying the image of the target object at the second position in the superimposed image, such that the image of the target object is located at the end of the offset trajectory.
Wherein the position of the target object in the image to be processed is consistent with the position of the tail end of the offset track.
Referring to fig. 6, fig. 7, fig. 8 and fig. 9, which illustrate the processing effect of this embodiment: fig. 6 is the image to be processed, fig. 7 is the offset track along which the image to be processed is offset, and fig. 8 is the template image drawn by the user. After the image to be processed of fig. 6, the offset track of fig. 7, and the template image of fig. 8 are superimposed, the resulting effect graph is as shown in fig. 9. As can be seen from fig. 9, the person in the image remains sharp, while the background clearly exhibits a blurring effect along the direction of the offset track.
Referring to fig. 10, an embodiment of the present application further provides an image processing apparatus 110, which includes a first obtaining module 111, an offset module 112, a superimposing module 113, a second obtaining module 114, and a fusion module 115. The image processing apparatus 110 includes software function modules that can be stored in the memory 120 in the form of software or firmware, or solidified in the operating system (OS) of the electronic device 100.
Specifically, the first obtaining module 111 is configured to obtain an image to be processed and an offset trajectory.
The first obtaining module 111 in this embodiment is used to execute step S110, and the detailed description about the first obtaining module 111 may refer to the description about step S110.
The offset module 112 is configured to perform offset processing multiple times, along the offset trajectory, on the portion of the image to be processed located in the area around the offset trajectory, so as to obtain a plurality of offset images.
The offset module 112 in this embodiment is used to execute step S120, and the detailed description about the offset module 112 may refer to the description of step S120.
And the superimposing module 113 is configured to superimpose the plurality of offset images to obtain a superimposed image.
The superimposing module 113 in this embodiment is configured to execute step S130, and the detailed description about the superimposing module 113 may refer to the description of step S130.
And a second obtaining module 114, configured to obtain an image of the target object from the image to be processed.
The second obtaining module 114 in this embodiment is configured to execute step S140, and the detailed description about the second obtaining module 114 may refer to the description about step S140.
And a fusion module 115, configured to display the image of the target object at a corresponding position in the superimposed image according to the offset trajectory.
The fusion module 115 in this embodiment is configured to execute step S150, and the detailed description about the fusion module 115 may refer to the description of step S150.
In summary, in the scheme provided in this embodiment, the image to be processed is shifted to different degrees along the track to obtain shift maps corresponding to the respective shift degrees, and the shift maps, the image to be processed, and the template image are then fused, finally yielding an image whose effect is similar to rear-curtain-sync photography on a single-lens reflex camera. In this way, an image with a similar rear-curtain shooting effect can be captured with an ordinary image acquisition device, without relying on hardware such as a single-lens reflex flash and rear-curtain sync. The technical requirements on users are therefore low, and even a person unfamiliar with single-lens reflex equipment can shoot an image with a similar rear-curtain shooting effect.
The above description is only for various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and all such changes or substitutions are included in the scope of the present application. Therefore, the protection scope of the present application is subject to the protection scope of the claims.
Claims (9)
1. An image processing method, characterized in that the method comprises:
acquiring an image to be processed and an offset track;
carrying out offset processing multiple times on the image of the area around the offset track in the image to be processed, along the offset track, to obtain a plurality of offset images;
carrying out image superposition on the plurality of offset images to obtain superposed images;
acquiring a template image comprising the outline of a target object;
acquiring an image of the target object from the image to be processed according to the template image;
and displaying the image of the target object at the corresponding position in the superposed image according to the offset track.
2. The method according to claim 1, wherein the step of carrying out offset processing multiple times on the image of the area around the offset track in the image to be processed, along the offset track, to obtain a plurality of offset images comprises:
according to different offsets, calculating pixel values of all pixel points in the image of the area around the offset track under the offset, and obtaining the offset image, wherein the offset is the total offset distance of the area around the offset track.
3. The method according to claim 2, wherein the step of calculating the pixel value of each pixel point after the deviation along the deviation track in the image of the area around the deviation track under the deviation amount to obtain the deviation image comprises:
calculating and obtaining an offset weight corresponding to each pixel point in the image to be processed according to the offset track, wherein the offset weight is the offset proportion of the pixel point, and the offset weight of the pixel point closer to the offset track is larger;
calculating the product of the offset and the offset weight of each pixel point according to different offsets;
calculating offset pixel points corresponding to all the pixel points in the image to be processed according to the product of the offset and the offset weight values of all the pixel points;
and acquiring the pixel value of the offset pixel point corresponding to each pixel point to obtain an offset image.
4. The method according to claim 2, wherein the step of calculating the pixel value of each pixel point after the deviation along the deviation track in the image of the area around the deviation track under the deviation amount to obtain the deviation image comprises:
obtaining an offset weight of each pixel point which is within a preset distance range from the offset track, wherein the offset weight is an offset proportion of the pixel point, and the closer the offset track is, the larger the offset weight of the pixel point is;
calculating the product of the offset and the offset weight of each pixel point according to different offsets;
calculating offset pixel points corresponding to all the pixel points in the image to be processed according to the product of the offset and the offset weight values of all the pixel points;
and acquiring the pixel value of the offset pixel point corresponding to each pixel point to obtain an offset image.
5. The method according to any one of claims 1-4, wherein the step of carrying out image superposition on the plurality of offset images to obtain the superimposed image comprises:
and averaging pixel values of pixel points positioned at the coordinates of the same column and the same row in each offset image to obtain a superposed image.
6. The method according to any one of claims 1-4, further comprising:
acquiring a preset offset;
and calculating the product of the offset and a preset offset multiple to obtain a new offset.
7. The method according to any one of claims 1-4, wherein the step of displaying the image of the target object at the corresponding position in the overlay image according to the offset trajectory comprises:
acquiring a first position of the target object in the image to be processed;
determining a second position in the superimposed image corresponding to the first position;
displaying the image of the target object at the second position in the superimposed image such that the image of the target object is located at the end of the offset trajectory;
wherein the position of the target object in the image to be processed is consistent with the position of the tail end of the offset track.
8. An image processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring an image to be processed and an offset track;
the offset module is used for carrying out offset processing on the image of the area around the offset track on the image to be processed for multiple times along the offset track to obtain multiple offset images;
the superposition module is used for carrying out image superposition on the plurality of offset images to obtain superposed images;
the second acquisition module is used for acquiring a template image comprising the outline of the target object; acquiring an image of the target object from the image to be processed according to the template image;
and the fusion module is used for displaying the image of the target object at the corresponding position in the superposed image according to the offset track.
9. An electronic device, comprising an image capturing unit, a memory and a processor, wherein the image capturing unit is communicatively connected to the memory and the processor, respectively, and the memory is communicatively connected to the processor, wherein the memory stores executable instructions, and the processor implements the method according to any one of claims 1 to 7 when executing the executable instructions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910621104.7A CN110324534B (en) | 2019-07-10 | 2019-07-10 | Image processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110324534A CN110324534A (en) | 2019-10-11 |
CN110324534B true CN110324534B (en) | 2021-08-20 |
Family
ID=68123169
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910621104.7A Active CN110324534B (en) | 2019-07-10 | 2019-07-10 | Image processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110324534B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145309B (en) * | 2019-12-18 | 2023-07-28 | 深圳市万翼数字技术有限公司 | Image superposition method and related equipment |
CN112541867B (en) * | 2020-12-04 | 2024-08-09 | Oppo(重庆)智能科技有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN112672056A (en) * | 2020-12-25 | 2021-04-16 | 维沃移动通信有限公司 | Image processing method and device |
CN113923368B (en) * | 2021-11-25 | 2024-06-18 | 维沃移动通信有限公司 | Shooting method and device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102811352A (en) * | 2011-06-03 | 2012-12-05 | 卡西欧计算机株式会社 | Moving image generating method and moving image generating apparatus |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049922B (en) * | 2012-12-05 | 2015-08-19 | 深圳深讯和科技有限公司 | Move axle special efficacy image generating method and device |
US9445015B2 (en) * | 2014-02-20 | 2016-09-13 | Google Inc. | Methods and systems for adjusting sensor viewpoint to a virtual viewpoint |
CN104159033B (en) * | 2014-08-21 | 2016-01-27 | 努比亚技术有限公司 | A kind of optimization method of shooting effect and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||