CN110619615A - Method and apparatus for processing image
Method and apparatus for processing image
- Publication number
- CN110619615A (application CN201811635344.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- target
- initial
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
Embodiments of the present application disclose a method and apparatus for processing images. One embodiment of the method comprises: acquiring and displaying a target image and a preset set of fusion images; determining, from the set of fusion images, a fusion image to be fused with the target image as an initial fusion image; determining and displaying a target point corresponding to the initial fusion image, where the target point is used to perform at least two geometric transformations on the initial fusion image simultaneously; in response to detecting a user's movement operation on the target point, performing the at least two geometric transformations on the initial fusion image based on the movement operation to obtain a target fusion image; and fusing the target fusion image with the target image to generate a fused image. This embodiment can perform at least two geometric transformations on the fusion image simultaneously based on its target point, improving the efficiency and diversity of image processing.
Description
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for processing images.
Background
At present, when editing an image, a user usually adds special effects to make the image more vivid and attractive. Specifically, a technician typically presets special effects (e.g., sunglasses, hats) that can be added to images. The user can then select one of the preset special effects, add it to the image, and synthesize a new image.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing an image.
In a first aspect, an embodiment of the present application provides a method for processing an image, the method including: acquiring and displaying a target image and a preset set of fusion images; determining, from the set of fusion images, a fusion image to be fused with the target image as an initial fusion image; determining and displaying a target point corresponding to the initial fusion image, where the target point is used to perform at least two geometric transformations on the initial fusion image simultaneously; in response to detecting a user's movement operation on the target point, performing the at least two geometric transformations on the initial fusion image based on the movement operation to obtain a target fusion image; and fusing the target fusion image with the target image to generate a fused image.
In some embodiments, the set of fusion images includes dynamic fusion images; and determining a fusion image to be fused with the target image from the set as an initial fusion image includes: selecting a dynamic fusion image from the set of fusion images as the initial fusion image.
In some embodiments, determining a fusion image to be fused with the target image from the set of fusion images as an initial fusion image comprises: in response to detecting a user's selection operation on a fusion image in the set, determining the fusion image selected by the user as the initial fusion image.
In some embodiments, fusing the target fusion image and the target image to generate a fused image includes: in response to detecting a user's movement operation on the target fusion image, fusing the moved target fusion image with the target image to generate the fused image.
In some embodiments, the fusion images in the set of fusion images include preset fusion points; and fusing the target fusion image and the target image to generate a fused image includes: identifying the target image to obtain the fusion point included in the target image; and adding the target fusion image to the target image so that the fusion point included in the target fusion image overlaps the fusion point included in the target image, generating the fused image.
In a second aspect, an embodiment of the present application provides an apparatus for processing an image, the apparatus including: an image acquisition unit configured to acquire and display a target image and a preset set of fusion images; a first determination unit configured to determine, from the set of fusion images, a fusion image to be fused with the target image as an initial fusion image; a second determination unit configured to determine and display a target point corresponding to the initial fusion image, where the target point is used to perform at least two geometric transformations on the initial fusion image simultaneously; an image transformation unit configured to, in response to detecting a user's movement operation on the target point, perform the at least two geometric transformations on the initial fusion image based on the movement operation to obtain a target fusion image; and an image fusion unit configured to fuse the target fusion image with the target image to generate a fused image.
In some embodiments, the image fusion unit is further configured to: and in response to the detection of the movement operation of the user on the target fusion image, fusing the moved target fusion image and the target image to generate a fused image.
In some embodiments, the fusion images in the set of fusion images include preset fusion points; and the image fusion unit includes: an image recognition module configured to identify the target image and obtain the fusion point included in the target image; and an image adding module configured to add the target fusion image to the target image so that the fusion point included in the target fusion image coincides with the fusion point included in the target image, generating the fused image.
In a third aspect, an embodiment of the present application provides a terminal, including: one or more processors; a storage device having one or more programs stored thereon; and a display screen configured to display an image; where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the embodiments of the method for processing images described above.
In a fourth aspect, the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements any of the above-described methods for processing an image.
The method and apparatus for processing images provided by the embodiments of the present application acquire and display a target image and a preset set of fusion images; determine, from the set, a fusion image to be fused with the target image as an initial fusion image; determine and display a target point corresponding to the initial fusion image, where the target point is used to perform at least two geometric transformations on the initial fusion image simultaneously; in response to a user's movement operation on the target point, perform the at least two geometric transformations on the initial fusion image based on the movement operation to obtain a target fusion image; and finally fuse the target fusion image with the target image to generate a fused image. At least two geometric transformations can thus be performed simultaneously on the fusion image based on its target point, improving the efficiency and diversity of image processing.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for processing an image according to the present application;
FIG. 3 is a schematic illustration of an application scenario of a method for processing an image according to an embodiment of the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for processing an image according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of an apparatus for processing images according to the present application;
fig. 6 is a schematic structural diagram of a computer system suitable for implementing a terminal device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for processing images or the apparatus for processing images of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like. Various communication client applications, such as image processing applications, search applications, and social platform software, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above, implemented either as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module; this is not specifically limited herein.
The server 105 may be a server that provides various services, such as a background server that provides support for the fusion images displayed on the terminal devices 101, 102, 103. The background server may send the fusion images to the terminal device, so that the terminal device can use them to process data such as the target image and obtain a processing result (e.g., a fused image).
It should be noted that the method for processing the image provided by the embodiment of the present application is generally executed by the terminal devices 101, 102, 103, and accordingly, the apparatus for processing the image is generally disposed in the terminal devices 101, 102, 103.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where data used in generating the fused image does not need to be acquired from a remote location, the system architecture described above may not include a network and a server, but only a terminal device.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for processing an image according to the present application is shown. The method for processing the image comprises the following steps:
step 201, acquiring and displaying a target image and a preset image set for fusion.
In this embodiment, an executing subject (for example, the terminal devices 101, 102, 103 shown in fig. 1) of the method for processing images may acquire the target image and the preset fusion image set by a wired connection manner or a wireless connection manner, and display the target image and the preset fusion image set. Wherein the target image is an image to be processed. The fusion image in the fusion image set may be an image that is set in advance by a technician and is to be fused with the target image. Specifically, the fusion image may be an image (for example, a hat image, a glasses image, or the like) for decorating a target image (for example, a face image).
In this embodiment, the executing entity may obtain a target image pre-stored locally, or may obtain a target image sent by another electronic device (for example, the server 105 shown in fig. 1) communicatively connected to the executing entity; similarly, the execution main body may acquire a fusion image set stored locally in advance, or may acquire a fusion image set transmitted from another electronic device communicatively connected to the execution main body.
In step 202, a fusion image to be fused with a target image is determined as an initial fusion image from the fusion image set.
In the present embodiment, based on the fusion image set obtained in step 201, the execution subject may determine a fusion image to be fused with the target image from the fusion image set as an initial fusion image.
Specifically, the executing body may determine, as the initial fusion image, a fusion image to be fused with the target image from the fusion image set by various methods. For example, the execution subject may select the image for fusion from the set of images for fusion as the initial image for fusion in a random selection manner.
In some optional implementations of the embodiment, the executing body may determine, in response to detecting a selection operation of the user on an image for fusion in the set of images for fusion, the image for fusion selected by the user as the initial image for fusion. The selecting operation may be a click operation, a voice input operation, or the like. Here, the voice information input by the user through the voice input operation may be used as a voice instruction for selecting an image for fusion.
In practice, after the initial fusion image is determined, the initial fusion image may be displayed at the target position on the display screen of the execution subject, so that the initial fusion image may be adjusted subsequently. Here, the target position may be a predetermined position or a position determined based on the target image and the initial fusion image.
As an example, the image for initial fusion is a glasses image; the target image is a face image. The execution subject may perform image recognition on the target image, and determine the position of the eyes of the face corresponding to the target image in the target image. Further, the execution subject may determine the determined position of the eye in the target image as a target position of the image for initial fusion displayed on the display screen. It should be noted that, the face image recognition is a well-known technology which is widely researched and applied at present, and is not described herein again.
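As a concrete illustration of this placement step, the sketch below locates the eyes in a face image with OpenCV's bundled Haar eye cascade and returns their midpoint as the display position. This is only a minimal sketch: the patent does not prescribe any particular recognition method, and the function name, cascade choice, and fallback behavior are assumptions.

```python
import cv2

def initial_display_position(target_image_path):
    """Find where an eyeglasses-style fusion image should initially appear.

    Detects eyes in the target (face) image and returns the midpoint
    between them, or None if fewer than two eyes are found (in which
    case a preset default position could be used instead).
    """
    image = cv2.imread(target_image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None
    (x1, y1, w1, h1), (x2, y2, w2, h2) = eyes[:2]
    # Midpoint between the centers of the two detected eye boxes.
    cx = (x1 + w1 / 2.0 + x2 + w2 / 2.0) / 2.0
    cy = (y1 + h1 / 2.0 + y2 + h2 / 2.0) / 2.0
    return (cx, cy)
```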
Step 203, determining a target point corresponding to the image for initial fusion and displaying.
In this embodiment, based on the initial fusion image obtained in step 202, the execution subject may determine a target point and a display corresponding to the initial fusion image. Wherein the target points are used for simultaneously performing at least two geometrical transformations on the image for initial fusion. In practice, the geometric transformation may include rotation, scaling, translation, flipping, etc. The at least two geometric transformations corresponding to the target point may include any two or more of the geometric transformations listed above. As an example, the at least two geometric transformations corresponding to the target point may include rotation and scaling.
Specifically, the target point may be a preset point set in advance. For example, preset points for performing at least two types of geometric transformations on the fusion image are set in advance for the fusion image in the fusion image set, and the preset points of the initial fusion image are target points. Alternatively, the target point may be a point generated randomly after the initial fusion image is displayed. Still alternatively, the target point may be a point that the user clicks on the display screen.
In practice, after determining and displaying the target point, the execution body may perform the at least two geometric transformations on the initial fusion image simultaneously in response to determining that the position of the target point has changed. Specifically, the manner in which the at least two geometric transformations are performed on the initial fusion image can be determined from the pre-change and post-change positions of the target point.
And 204, responding to the movement operation of the user for the target point, and performing at least two kinds of geometric transformation on the initial image for fusion based on the movement operation to obtain the image for target fusion.
In this embodiment, the execution subject may perform at least two kinds of geometric transformations on the initial image for fusion based on the movement operation in response to detection of the movement operation of the user with respect to the target point, to obtain the target image for fusion. The target fusion image is an image that has been changed and is to be fused with the target image.
It is understood that the user's movement operation with respect to the target point may change the position of the target point. The movement start point of the movement operation is the pre-change position of the target point, and the movement end point of the movement operation is the post-change position of the target point. Further, the execution body may specify a pre-change position and a post-change position of the target point based on the movement start point and the movement end point of the movement operation, respectively, and may perform at least two kinds of geometric transformations on the initial fusion image based on the specified pre-change position and post-change position to obtain the target fusion image.
Specifically, a correspondence between the pre-change and post-change positions of the target point and the at least two geometric transformation methods for the initial fusion image may be preset, and the execution body may perform the at least two geometric transformations on the initial fusion image based on this correspondence to obtain the target fusion image.
Here, the correspondence relationship may be preset in various ways.
As an example, the at least two geometric transformations corresponding to the target point include rotation and scaling. For scaling, a correspondence between the distance from the pre-change position to the post-change position of the target point and the zoom ratio may be preset. For rotation, a correspondence between the direction of rotation and the direction of movement from the pre-change position to the post-change position may be preset; for example, movement to the right may correspond to clockwise rotation, and movement to the left to counterclockwise rotation. In addition, a reference line (for example, a horizontal line) may be preset, together with a correspondence between the rotation angle and the angle between the reference line and the line connecting the pre-change and post-change positions of the target point.
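To make this correspondence concrete, the sketch below derives a zoom ratio and a signed rotation angle from a single drag of the target point. It is a minimal sketch under one common realization — treating the target point as a handle and measuring distance and angle relative to the fusion image's center — and all names are illustrative; the patent itself only requires that both transformations be determined by the target point's pre-change and post-change positions.

```python
import math

def rotation_and_scale(center, pre_pos, post_pos):
    """Map one movement of the target point to a simultaneous
    rotation + scaling of the initial fusion image.

    center:   anchor of the fusion image (e.g., its center)
    pre_pos:  target point position before the move (drag start)
    post_pos: target point position after the move (drag end)
    Returns (scale_factor, rotation_degrees); positive degrees mean
    counterclockwise in standard math coordinates.
    """
    vx0, vy0 = pre_pos[0] - center[0], pre_pos[1] - center[1]
    vx1, vy1 = post_pos[0] - center[0], post_pos[1] - center[1]
    d0, d1 = math.hypot(vx0, vy0), math.hypot(vx1, vy1)
    scale = d1 / d0 if d0 else 1.0  # distance ratio -> zoom ratio
    angle = math.degrees(math.atan2(vy1, vx1) - math.atan2(vy0, vx0))
    return scale, angle
```

Applying the returned pair to the initial fusion image (scaling about `center`, then rotating by `angle`) yields the target fusion image from a single gesture.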
And step 205, fusing the target fusion image and the target image to generate a fused image.
In this embodiment, based on the target fusion image obtained in step 204 and the target image obtained in step 201, the execution subject may fuse the target fusion image and the target image to generate a fused image.
In practice, the target fusion image may be displayed at the target fusion position of the target image (for example, the target position determined based on the target image and the initial fusion image in step 202) before the image fusion is performed to generate the fused image, and in this case, the execution subject may directly fuse the target fusion image and the target image to generate the fused image. Alternatively, if the target fusion image is not displayed at the target fusion position of the target image before the image fusion is performed to generate the fused image, the execution body may first move the target fusion image so that the target fusion image is displayed at the target fusion position of the target image, and then fuse the moved target fusion image and the target image to generate the fused image. The target fusion position is used for indicating the position of the target fusion image on the target image when the target fusion image and the target image are fused.
Specifically, the execution body may move the target fusion image in various ways so that it is displayed at the target fusion position of the target image, and then fuse the moved target fusion image with the target image to generate the fused image.
In some optional implementation manners of this embodiment, the executing entity may fuse the moved target fusion image and the target image in response to detecting a moving operation of the user on the target fusion image, and generate a fused image. It is to be understood that, here, the position indicated by the movement end point corresponding to the movement operation is the target fusion position.
In some optional implementations of this embodiment, the fusion images in the set include preset fusion points, where a fusion point is the point used to fuse a fusion image with the target image. The execution body may then generate the fused image through the following steps. First, the execution body may identify the target image and obtain the fusion point included in the target image, i.e., the point to be overlapped with the fusion point of the fusion image; the position indicated by this fusion point is the target fusion position. Then, the execution body may add the target fusion image to the target image so that the fusion point of the target fusion image overlaps the fusion point of the target image, thereby generating the fused image.
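The sketch below is a minimal Pillow-based illustration of this fusion-point alignment; the paths, point arguments, and alpha-compositing choice are assumptions rather than anything the patent prescribes.

```python
from PIL import Image

def fuse_at_points(target_path, fusion_path, target_point, fusion_point):
    """Overlay the (already transformed) target fusion image onto the
    target image so that the two preset fusion points coincide.

    target_point: fusion point found in the target image (pixel coords)
    fusion_point: preset fusion point of the fusion image (pixel coords)
    """
    target = Image.open(target_path).convert("RGBA")
    fusion = Image.open(fusion_path).convert("RGBA")
    # Shift the fusion image so its fusion point lands on the target's.
    offset = (int(round(target_point[0] - fusion_point[0])),
              int(round(target_point[1] - fusion_point[1])))
    target.paste(fusion, offset, fusion)  # alpha channel used as mask
    return target.convert("RGB")
```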
It should be noted that the image fusion technique is a well-known technique that is currently widely researched and applied, and is not described herein again.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for processing an image according to the present embodiment. In the application scenario of fig. 3, the terminal device may first acquire and display a target image 301 and a preset fusion image set 302, where the fusion image set 302 includes a fusion image 3021 and a fusion image 3022 (as indicated by reference numeral 303). Then, in response to detecting that the user has performed a click operation on the image for fusion 3022, the terminal apparatus may determine the image for fusion 3022 as the image for initial fusion 304 for fusion with the target image 301, and further, the terminal apparatus may determine a target point 305 corresponding to the image for initial fusion 304 for performing at least two kinds of geometric transformations (e.g., rotation and zoom) on the image for initial fusion at the same time, and display (as indicated by reference numeral 306). Then, in response to detecting a movement operation of the user with respect to the target point 305, the terminal device may perform at least two kinds of geometric transformations on the initial image for fusion 304 based on the movement operation, and obtain a target image for fusion 307 (as indicated by reference numeral 308). Finally, the terminal device may fuse the target fusion image 307 and the target image 301 to generate a fused image 309 (as indicated by reference numeral 310).
The method provided by the above embodiment of the present application acquires and displays a target image and a preset set of fusion images; determines, from the set, a fusion image to be fused with the target image as an initial fusion image; determines and displays a target point corresponding to the initial fusion image, where the target point is used to perform at least two geometric transformations on the initial fusion image simultaneously; in response to a user's movement operation on the target point, performs the at least two geometric transformations on the initial fusion image based on the movement operation to obtain a target fusion image; and finally fuses the target fusion image with the target image to generate a fused image. At least two geometric transformations can thus be performed simultaneously on the fusion image based on its target point, improving the efficiency and diversity of image processing.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for processing an image is shown. The flow 400 of the method for processing an image comprises the steps of:
step 401, acquiring and displaying a target image and a preset image set for fusion.
In this embodiment, an executing subject (for example, the terminal devices 101, 102, 103 shown in fig. 1) of the method for processing images may acquire the target image and the preset fusion image set by a wired connection manner or a wireless connection manner, and display the target image and the preset fusion image set. Wherein the target image is an image to be processed. The fusion image in the fusion image set may be an image that is set in advance by a technician and is to be fused with the target image. The fusion image set may include images for dynamic fusion. The image for dynamic fusion is an image for fusion having a dynamic effect. In particular, a dynamic effect may be created when a particular set of static images is switched at a specified frequency.
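As a sketch of how such a dynamic effect can be driven — cycling a fixed list of static frames at a chosen frequency — consider the helper below; the frame-rate value and names are illustrative assumptions.

```python
def current_frame(frames, elapsed_seconds, frequency_hz=12.0):
    """Select which static frame of a dynamic fusion image to display.

    Cycling through `frames` at `frequency_hz` switches per second
    produces the dynamic effect; the geometric transformations chosen
    via the target point can then be applied to the displayed frame.
    """
    index = int(elapsed_seconds * frequency_hz) % len(frames)
    return frames[index]
```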
Step 402, selecting a dynamic fusion image from the fusion image set as an initial fusion image.
In this embodiment, based on the fusion image set obtained in step 401, the execution subject may select a dynamic fusion image from the fusion image set as an initial fusion image.
Specifically, in response to determining that the fusion image set includes one image for dynamic fusion, the executing body may directly select the image for dynamic fusion as an image for initial fusion; alternatively, in response to determining that the set of images for fusion includes at least two images for dynamic fusion, the execution subject may select an image for dynamic fusion as an initial image for fusion from the at least two images for dynamic fusion in various ways. For example, the execution subject may select a dynamic fusion image from at least two dynamic fusion images as an initial fusion image in a random selection manner; alternatively, the execution subject may determine, as the initial fusion image, the dynamic fusion image selected by the user in response to detection of a selection operation of the user for the dynamic fusion image in the set of fusion images.
Step 403, determining a target point corresponding to the image for initial fusion and displaying the target point.
In this embodiment, based on the initial fusion image obtained in step 402, the execution body may determine and display the target point corresponding to the initial fusion image. The target point is used to perform at least two geometric transformations on the initial fusion image simultaneously.
And step 404, in response to the detection of the movement operation of the user for the target point, performing at least two kinds of geometric transformation on the initial image for fusion based on the movement operation to obtain the target image for fusion.
In this embodiment, in response to detecting a user's movement operation on the target point, the execution body may perform the at least two geometric transformations on the initial fusion image based on the movement operation to obtain the target fusion image. The target fusion image is the transformed image to be fused with the target image.
And 405, fusing the target fusion image and the target image to generate a fused image.
In this embodiment, based on the target fusion image obtained in step 404 and the target image obtained in step 401, the execution subject may fuse the target fusion image and the target image to generate a fused image.
The steps 401, 403, 404 and 405 are respectively consistent with the steps 201, 203, 204 and 205 in the foregoing embodiment, and the above description for the steps 201, 203, 204 and 205 also applies to the steps 401, 403, 404 and 405, which is not repeated herein.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for processing images in this embodiment highlights the step of selecting a dynamic fusion image from the set of fusion images as the initial fusion image. Compared with a static fusion image, a dynamic fusion image can be more vivid and lifelike, so the scheme provided by this embodiment can improve the display effect of the fused image based on the dynamic fusion image and further increase the diversity of image processing.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for processing an image, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for processing an image of the present embodiment includes: an image acquisition unit 501, a first determination unit 502, a second determination unit 503, an image transformation unit 504, and an image fusion unit 505. The image acquisition unit 501 is configured to acquire and display a target image and a preset set of fusion images; the first determination unit 502 is configured to determine, from the set of fusion images, a fusion image to be fused with the target image as an initial fusion image; the second determination unit 503 is configured to determine and display a target point corresponding to the initial fusion image, where the target point is used to perform at least two geometric transformations on the initial fusion image simultaneously; the image transformation unit 504 is configured to, in response to detecting a user's movement operation on the target point, perform the at least two geometric transformations on the initial fusion image based on the movement operation to obtain a target fusion image; and the image fusion unit 505 is configured to fuse the target fusion image with the target image to generate a fused image.
In this embodiment, the image acquiring unit 501 of the apparatus 500 for processing an image may acquire the target image and the preset set of images for fusion by a wired connection manner or a wireless connection manner, and display the target image and the preset set of images for fusion. Wherein the target image is an image to be processed. The fusion image in the fusion image set may be an image that is set in advance by a technician and is to be fused with the target image. Specifically, the fusion image may be an image (for example, a hat image, a glasses image, or the like) for decorating a target image (for example, a face image).
In this embodiment, based on the set of images for fusion obtained by the image acquisition unit 501, the first determination unit 502 may determine an image for fusion used for fusion with the target image from the set of images for fusion as an image for initial fusion.
In this embodiment, based on the initial fusion image obtained by the first determination unit 502, the second determination unit 503 may determine and display the target point corresponding to the initial fusion image. The target point is used to perform at least two geometric transformations on the initial fusion image simultaneously.
In this embodiment, in response to detecting a user's movement operation on the target point, the image transformation unit 504 may perform the at least two geometric transformations on the initial fusion image based on the movement operation to obtain the target fusion image. The target fusion image is the transformed image to be fused with the target image.
In this embodiment, based on the target fusion image obtained by the image transformation unit 504 and the target image obtained by the image acquisition unit 501, the image fusion unit 505 may fuse the target fusion image and the target image to generate a fused image.
In some optional implementations of this embodiment, the fusion image set includes images for dynamic fusion; and the first determining unit 502 may be further configured to: and selecting the image for dynamic fusion from the image set for fusion as an image for initial fusion.
In some optional implementations of this embodiment, the first determining unit 502 may be further configured to: in response to detecting a user's selection operation for an image for fusion in the set of images for fusion, the image for fusion selected by the user is determined as an image for initial fusion.
In some optional implementations of the present embodiment, the image fusion unit 505 may be further configured to: and in response to the detection of the movement operation of the user on the target fusion image, fusing the moved target fusion image and the target image to generate a fused image.
In some optional implementations of the embodiment, the fusion images in the fusion image set include preset fusion points; and the image fusion unit 505 may include: an image recognition module (not shown in the figure) configured to recognize the target image and obtain a fusion point included in the target image; and an image adding module (not shown in the figure) configured to add the target fusion image to the target image so that the fusion point included in the target fusion image coincides with the fusion point included in the target image, and generate a fused image.
It will be understood that the elements described in the apparatus 500 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.
The apparatus 500 provided by the above embodiment of the present application acquires and displays a target image and a preset set of fusion images; determines, from the set, a fusion image to be fused with the target image as an initial fusion image; determines and displays a target point corresponding to the initial fusion image, where the target point is used to perform at least two geometric transformations on the initial fusion image simultaneously; in response to a user's movement operation on the target point, performs the at least two geometric transformations on the initial fusion image based on the movement operation to obtain a target fusion image; and finally fuses the target fusion image with the target image to generate a fused image. Rotation and scaling can thus be performed simultaneously on the fusion image based on its target point, improving the efficiency and diversity of image processing.
Referring now to fig. 6, a block diagram of a terminal device (e.g., terminal device of fig. 1) 600 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the terminal device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the terminal device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the terminal device 600 to perform wireless or wired communication with other devices to exchange data. While fig. 6 illustrates a terminal apparatus 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device. The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: acquiring and displaying a target image and a preset fusion image set; determining a fusion image for fusion with a target image from the fusion image set as an initial fusion image; determining a target point corresponding to the image for initial fusion and displaying, wherein the target point is used for simultaneously carrying out at least two kinds of geometric transformation on the image for initial fusion; in response to the detection of the movement operation of the user for the target point, performing at least two kinds of geometric transformation on the initial image for fusion based on the movement operation to obtain a target image for fusion; and fusing the target fusion image and the target image to generate a fused image.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Here, the name of the unit does not constitute a limitation of the unit itself in some cases, and for example, the image acquisition unit may also be described as "a unit that acquires a target image".
The foregoing description is only an explanation of the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure — for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Claims (10)
1. A method for processing an image, comprising:
acquiring and displaying a target image and a preset fusion image set;
determining a fusion image for fusion with the target image from the fusion image set as an initial fusion image;
determining a target point corresponding to the initial image for fusion and displaying the target point, wherein the target point is used for simultaneously carrying out at least two kinds of geometric transformation on the initial image for fusion;
in response to detecting a movement operation of a user for the target point, performing the at least two kinds of geometric transformation on the initial image for fusion based on the movement operation to obtain a target image for fusion;
and fusing the target image for fusion and the target image to generate a fused image.
2. The method of claim 1, wherein the set of images for fusion comprises images for dynamic fusion; and
the method for determining a fusion image for fusion with the target image from the fusion image set as an initial fusion image includes:
and selecting the images for dynamic fusion from the image set for fusion as initial images for fusion.
3. The method according to claim 1, wherein the determining, from the set of images for fusion, an image for fusion with the target image as an initial image for fusion comprises:
in response to detecting a user's selection operation for an image for fusion in the set of images for fusion, determining the image for fusion selected by the user as an image for initial fusion.
4. The method of claim 1, wherein the fusing the target fusion image and the target image to generate a fused image comprises:
and in response to the detection of the movement operation of the user on the target fusion image, fusing the moved target fusion image and the target image to generate a fused image.
5. The method according to one of claims 1 to 4, wherein the fusion images in the set of fusion images include preset fusion points; and
the fusing the target image and the target image for target fusion to generate a fused image includes:
identifying the target image to obtain fusion points included in the target image;
and adding the target fusion image to the target image so that the fusion point included in the target fusion image overlaps the fusion point included in the target image, thereby generating a fused image.
6. An apparatus for processing an image, comprising:
an image acquisition unit configured to acquire and display a target image and a preset fusion image set;
a first determination unit configured to determine, as an initial fusion image, a fusion image for fusion with the target image from the set of fusion images;
a second determining unit, configured to determine a target point corresponding to the initial image for fusion and display the target point, where the target point is used to perform at least two kinds of geometric transformations on the initial image for fusion simultaneously;
an image transform unit configured to perform the at least two kinds of geometric transforms on the initial image for fusion based on a movement operation of a user for the target point in response to detection of the movement operation, to obtain a target image for fusion;
and an image fusion unit configured to fuse the target fusion image and the target image and generate a fused image.
7. The apparatus of claim 6, wherein the set of images for fusion comprises images for dynamic fusion; and
the first determination unit is further configured to:
and selecting the images for dynamic fusion from the image set for fusion as initial images for fusion.
8. The apparatus according to one of claims 6-7, wherein the fusion images in the set of fusion images include preset fusion points; and
the image fusion unit includes:
the image identification module is configured to identify the target image and obtain fusion points included in the target image;
an image adding module configured to add the target fusion image to the target image so that a fusion point included in the target fusion image coincides with a fusion point included in the target image, and generate a fused image.
9. A terminal, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
a display screen configured to display an image;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811635344.4A CN110619615A (en) | 2018-12-29 | 2018-12-29 | Method and apparatus for processing image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110619615A true CN110619615A (en) | 2019-12-27 |
Family
ID=68921018
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811635344.4A Pending CN110619615A (en) | 2018-12-29 | 2018-12-29 | Method and apparatus for processing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110619615A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107340964A (en) * | 2017-06-02 | 2017-11-10 | 武汉斗鱼网络科技有限公司 | The animation effect implementation method and device of a kind of view |
CN107483892A (en) * | 2017-09-08 | 2017-12-15 | 北京奇虎科技有限公司 | Video data real-time processing method and device, computing device |
CN109035373A (en) * | 2018-06-28 | 2018-12-18 | 北京市商汤科技开发有限公司 | The generation of three-dimensional special efficacy program file packet and three-dimensional special efficacy generation method and device |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022057576A1 (en) * | 2020-09-17 | 2022-03-24 | 北京字节跳动网络技术有限公司 | Facial image display method and apparatus, and electronic device and storage medium |
US11935176B2 (en) | 2020-09-17 | 2024-03-19 | Beijing Bytedance Network Technology Co., Ltd. | Face image displaying method and apparatus, electronic device, and storage medium |
WO2023185455A1 (en) * | 2022-03-28 | 2023-10-05 | 北京字跳网络技术有限公司 | Image processing method and apparatus, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||