CN110992256A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN110992256A
Authority
CN
China
Prior art keywords
image
identification information
map
terminal
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911299168.6A
Other languages
Chinese (zh)
Other versions
CN110992256B (en)
Inventor
罗飞虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911299168.6A priority Critical patent/CN110992256B/en
Publication of CN110992256A publication Critical patent/CN110992256A/en
Application granted granted Critical
Publication of CN110992256B publication Critical patent/CN110992256B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F16/9554 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL] by using bar codes

Abstract

The invention provides an image processing method, apparatus, device and storage medium. The method includes: receiving a first image and first template identification information sent by a first terminal; fusing the first image with the corresponding template to obtain a first fused image; generating first fused-image identification information for the first fused image and feeding it back to the first terminal, so that the first terminal shares first co-shooting request information carrying the first fused-image identification information with a second terminal, and so that the second terminal acquires the first fused image based on the first fused-image identification information; receiving a second image and second template identification information sent by the second terminal; fusing the two to obtain a second fused image; sending the second fused image to the second terminal, so that the second terminal composites the first fused image and the second fused image to obtain a target composite image; and sharing image synthesis information with the first terminal, so that the first terminal obtains and displays the target composite image based on the image synthesis information. The invention achieves a co-shooting effect in which multiple users appear in the same frame of a fused image, and improves the social interactivity and fun of image fusion.

Description

Image processing method, device, equipment and storage medium
Technical Field
The invention belongs to the technical field of the Internet, and in particular relates to an image processing method, apparatus, device and storage medium.
Background
In the field of face image processing, with the development of artificial intelligence technology, functions such as face beautification, stickers, hairstyle changing and face swapping in various photographing or image processing applications have become popular with users. Face swapping is essentially face image fusion: a cloud server fuses a user photo with a material photo, for example by adjusting skin color so that the face region of the user photo blends naturally into the face region of the material photo. The fused image combines the facial appearance features of the user photo with the character styling of the material photo (such as a military-uniform, child, or ancient-costume look), giving the user a natural, true-to-life effect, meeting rich entertainment needs and increasing the fun of image processing applications.
In the prior art, a single user photo is generally fused with a single material image to obtain a single, fixed fused image. Such an image is monotonous and of limited interest, lacks social interaction, and cannot achieve the co-shooting effect of multiple users appearing in the same frame of a fused image.
Disclosure of Invention
To achieve the co-shooting effect of multiple users appearing in the same frame of a fused image, and to improve the social interaction and fun of fused images, the present invention provides an image processing method, apparatus, device and storage medium.
In one aspect, the present invention provides an image processing method, including:
receiving a first image and first template identification information sent by a first terminal, wherein the first template identification information corresponds to a first image template selected by the first terminal from a multi-image fusion template, and the multi-image fusion template comprises a plurality of image templates;
fusing the first image and the first image template to obtain a first fused image;
generating first record information corresponding to the first terminal and the first fused image;
encrypting the first record information to obtain first fused-image identification information;
sending the first fused-image identification information and the first fused image to the first terminal, so that the first terminal shares first co-shooting request information with a second terminal, wherein the first co-shooting request information carries the first fused-image identification information; and so that the second terminal acquires the first fused image based on the first fused-image identification information;
receiving a second image and second template identification information sent by the second terminal, wherein the second template identification information corresponds to a second image template selected by the second terminal from the multi-image fusion template;
fusing the second image and the second image template to obtain a second fused image;
generating second record information corresponding to the second terminal and the second fused image;
encrypting the second record information to obtain second fused-image identification information;
sending the second fused image and the second fused-image identification information to the second terminal, so that the second terminal composites the first fused image and the second fused image to obtain a target composite image, and shares image synthesis information carrying the second fused-image identification information and the first fused-image identification information with the first terminal, so that the first terminal obtains the target composite image based on the image synthesis information and displays it.
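The server-side steps above (generate record information tying a terminal to its fused image, then encrypt it into identification information) can be sketched as follows. The patent only says the record information is "encrypted"; the HMAC-signed, base64-encoded token below is a stdlib-only stand-in for that step, and all names (`SECRET`, `make_record`, the field names) are illustrative assumptions rather than the patent's actual scheme.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # hypothetical key held only by the fusion server

def make_record(terminal_id: str, fused_image_path: str) -> dict:
    """Record information tying a terminal to the fused image it produced."""
    return {"terminal": terminal_id, "fused_image": fused_image_path}

def encode_identification(record: dict) -> str:
    """Turn record information into opaque fused-image identification
    information: a base64 token carrying the record plus an HMAC tag."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()[:8]
    return base64.urlsafe_b64encode(tag + payload).decode()

def decode_identification(token: str) -> dict:
    """Recover and verify the record from the identification information."""
    raw = base64.urlsafe_b64decode(token.encode())
    tag, payload = raw[:8], raw[8:]
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("identification information has been tampered with")
    return json.loads(payload)

record = make_record("terminal-1", "/fused/abc.png")
token = encode_identification(record)
assert decode_identification(token) == record
```

Because the token is opaque, it can be embedded in a shared link or QR code without exposing server-internal record details.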
In another aspect, the present invention provides an image processing method, including:
sending a first image and first template identification information to an image fusion server, wherein the first template identification information corresponds to a first image template selected by the local terminal from a multi-image fusion template comprising a plurality of image templates, so that the image fusion server fuses the first image with the first image template to obtain a first fused image, generates first record information corresponding to the first terminal and the first fused image, and encrypts the first record information to obtain first fused-image identification information;
receiving the first fused-image identification information and the first fused image sent by the image fusion server;
sharing first co-shooting request information with a second terminal, wherein the first co-shooting request information carries the first fused-image identification information, so that the second terminal acquires the first fused image based on the first fused-image identification information and sends a second image and second template identification information to the image fusion server, the second template identification information corresponding to a second image template selected by the second terminal from the multi-image fusion template, so that the image fusion server fuses the second image with the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, encrypts the second record information to obtain second fused-image identification information, and sends the second fused image and the second fused-image identification information to the second terminal, so that the second terminal composites the first fused image and the second fused image to obtain a target composite image;
receiving image synthesis information shared by the second terminal, wherein the image synthesis information carries the second fused-image identification information and the first fused-image identification information;
and acquiring the target composite image based on the second fused-image identification information and the first fused-image identification information, and displaying the target composite image.
In another aspect, the present invention provides an image processing method, including:
receiving first co-shooting request information shared by a first terminal, wherein the first co-shooting request information carries first fused-image identification information; the first fused-image identification information is obtained by an image fusion server encrypting first record information generated based on the first terminal and a first fused image; the first fused image is obtained by the image fusion server fusing a first image with a first image template selected by the first terminal from a multi-image fusion template; the first image and the first template identification information corresponding to the first image template are sent to the image fusion server by the first terminal; and the multi-image fusion template comprises a plurality of image templates;
acquiring the first fused image based on the first fused-image identification information;
sending a second image and second template identification information to the image fusion server, wherein the second template identification information corresponds to a second image template selected by the local terminal from the multi-image fusion template, so that the image fusion server fuses the second image with the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, and encrypts the second record information to obtain second fused-image identification information;
receiving the second fused image and the second fused-image identification information sent by the image fusion server;
compositing the first fused image and the second fused image to obtain a target composite image;
sharing image synthesis information with the first terminal, wherein the image synthesis information carries the second fused-image identification information and the first fused-image identification information, so that the first terminal obtains the target composite image based on the second fused-image identification information and the first fused-image identification information, and displays the target composite image.
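The compositing step above can be pictured as placing the two fused images side by side in one frame, as in Fig. 8. A minimal sketch, assuming both fused images are equally sized row-major pixel grids and that a side-by-side layout is the composition rule (the patent does not fix a specific layout):

```python
def composite_side_by_side(first_fused, second_fused):
    """Composite two fused images into one target composite image by
    placing them side by side; images are row-major lists of pixel rows."""
    if len(first_fused) != len(second_fused):
        raise ValueError("fused images must have the same height")
    # Concatenate each row: left half from the first fused image, right
    # half from the second, so both users appear in the same frame.
    return [left + right for left, right in zip(first_fused, second_fused)]

# Tiny 2x2 grayscale stand-ins for the two fused images.
first = [[1, 1], [1, 1]]
second = [[9, 9], [9, 9]]
target = composite_side_by_side(first, second)  # a 2x4 frame with both images
```

In a real application the same idea would be applied with an image library and the layout defined by the chosen multi-image fusion template, but the row-concatenation above captures the compositing relationship.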
In another aspect, the present invention provides an image processing apparatus, comprising:
a first receiving module, configured to receive a first image and first template identification information sent by a first terminal, wherein the first template identification information corresponds to a first image template selected by the first terminal from a multi-image fusion template, and the multi-image fusion template comprises a plurality of image templates;
a first fusion module, configured to fuse the first image with the first image template to obtain a first fused image;
a first generating module, configured to generate first record information corresponding to the first terminal and the first fused image;
a first encryption module, configured to encrypt the first record information to obtain first fused-image identification information;
a first sending module, configured to send the first fused-image identification information and the first fused image to the first terminal, so that the first terminal shares first co-shooting request information with a second terminal, wherein the first co-shooting request information carries the first fused-image identification information, and so that the second terminal acquires the first fused image based on the first fused-image identification information;
a second receiving module, configured to receive a second image and second template identification information sent by the second terminal, where the second template identification information corresponds to a second image template selected by the second terminal from the multi-image fusion template;
a second fusion module, configured to fuse the second image with the second image template to obtain a second fused image;
a second generating module, configured to generate second record information corresponding to the second terminal and the second fused image;
a second encryption module, configured to encrypt the second record information to obtain second fused-image identification information;
a second sending module, configured to send the second fused image and the second fused-image identification information to the second terminal, so that the second terminal composites the first fused image and the second fused image to obtain a target composite image, and shares image synthesis information carrying the second fused-image identification information and the first fused-image identification information with the first terminal, so that the first terminal obtains the target composite image based on the image synthesis information and displays it.
In another aspect, the present invention provides an image processing apparatus, comprising:
a third sending module, configured to send a first image and first template identification information to an image fusion server, wherein the first template identification information corresponds to a first image template selected by the local terminal from a multi-image fusion template comprising a plurality of image templates, so that the image fusion server fuses the first image with the first image template to obtain a first fused image, generates first record information corresponding to the first terminal and the first fused image, and encrypts the first record information to obtain first fused-image identification information;
a third receiving module, configured to receive the first fused-image identification information and the first fused image sent by the image fusion server;
a first sharing module, configured to share first co-shooting request information with a second terminal, wherein the first co-shooting request information carries the first fused-image identification information, so that the second terminal acquires the first fused image based on the first fused-image identification information and sends a second image and second template identification information to the image fusion server, the second template identification information corresponding to a second image template selected by the second terminal from the multi-image fusion template, so that the image fusion server fuses the second image with the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, encrypts the second record information to obtain second fused-image identification information, and sends the second fused image and the second fused-image identification information to the second terminal, so that the second terminal composites the first fused image and the second fused image to obtain a target composite image;
an image synthesis information receiving module, configured to receive image synthesis information shared by the second terminal, wherein the image synthesis information carries the second fused-image identification information and the first fused-image identification information;
and a display module, configured to acquire the target composite image based on the second fused-image identification information and the first fused-image identification information, and display the target composite image.
In another aspect, the present invention provides an image processing apparatus, comprising:
a fourth receiving module, configured to receive first co-shooting request information shared by a first terminal, wherein the first co-shooting request information carries first fused-image identification information; the first fused-image identification information is obtained by an image fusion server encrypting first record information generated based on the first terminal and a first fused image; the first fused image is obtained by the image fusion server fusing a first image with a first image template selected by the first terminal from a multi-image fusion template; the first image and the first template identification information corresponding to the first image template are sent to the image fusion server by the first terminal; and the multi-image fusion template comprises a plurality of image templates;
an acquisition module, configured to acquire the first fused image based on the first fused-image identification information;
a fourth sending module, configured to send a second image and second template identification information to the image fusion server, wherein the second template identification information corresponds to a second image template selected by the local terminal from the multi-image fusion template, so that the image fusion server fuses the second image with the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, and encrypts the second record information to obtain second fused-image identification information;
a fifth receiving module, configured to receive the second fused image and the second fused-image identification information sent by the image fusion server;
a synthesis module, configured to composite the first fused image and the second fused image to obtain a target composite image;
a second sharing module, configured to share image synthesis information with the first terminal, wherein the image synthesis information carries the second fused-image identification information and the first fused-image identification information, so that the first terminal obtains the target composite image based on the second fused-image identification information and the first fused-image identification information, and displays the target composite image.
In another aspect, the present invention provides an apparatus, comprising: a processor and a memory, the memory having stored therein at least one instruction or at least one program, the at least one instruction or the at least one program being loaded and executed by the processor to implement the image processing method as described above.
In another aspect, the present invention provides a computer-readable storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the image processing method as described above.
The invention provides an image processing method, apparatus, device and storage medium. A first terminal user uploads a first image (for example, a face image) and first image template information; the server performs face fusion to obtain a first fused image and generates first co-shooting request information (for example, a parameter link or a QR code) carrying unique first fused-image identification information, which the user shares socially. After a second terminal user opens the shared link or QR code, the second terminal obtains the first fused image; the second user then selects a second image template and uploads a second image to join the co-shooting request initiated by the first terminal, producing a same-frame co-shooting effect image (that is, the target composite image) together with the first terminal user, and shares the corresponding image synthesis information back. In this way, pairs of social users (for example, friends) can share same-frame co-shooting effect images and spread them socially, which increases the social interaction around fused images and improves both the fun of image fusion and the user experience.
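When the co-shooting request information is realized as a parameter link, it might simply carry the fused-image identification information as a query parameter that the second terminal parses back out on opening the link. The base URL and parameter name below are hypothetical, not taken from the patent:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical landing page of the image fusion application.
BASE_URL = "https://fusion.example.com/coshoot"

def build_coshoot_link(fused_image_id: str) -> str:
    """Co-shooting request information as a parameter link carrying the
    fused-image identification information in a 'fid' query parameter."""
    return BASE_URL + "?" + urlencode({"fid": fused_image_id})

def extract_fused_image_id(link: str) -> str:
    """What the second terminal does on opening the shared link."""
    return parse_qs(urlparse(link).query)["fid"][0]

link = build_coshoot_link("dG9rZW4tMTIz")
assert extract_fused_image_id(link) == "dG9rZW4tMTIz"
```

A QR code variant would encode the same URL, so the identification information survives either sharing channel unchanged.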
Drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a multi-image fusion template and an image template provided in an embodiment of the present invention in a scene.
Fig. 4 is a flowchart illustrating another image processing method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a first fused image obtained by fusing a first image with a first image template, and of the composited first fused image, according to an embodiment of the present invention.
Fig. 6 is an effect diagram of a first fused image viewed by a first terminal user according to an embodiment of the present invention.
Fig. 7 is a schematic diagram illustrating image template selection displayed on a display interface of a second terminal according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of a second fused image obtained by fusing a second image with a second image template, and of a target composite image obtained by compositing the first fused image and the second fused image, according to an embodiment of the present invention.
Fig. 9 is a flowchart illustrating another image processing method according to an embodiment of the present invention.
Fig. 10 is a flowchart illustrating another image processing method according to an embodiment of the present invention.
Fig. 11 is a flowchart illustrating another image processing method according to an embodiment of the present invention.
Fig. 12 is a flowchart illustrating another image processing method according to an embodiment of the present invention.
Fig. 13 is an alternative structure diagram of the blockchain system according to the embodiment of the present invention.
Fig. 14 is an alternative schematic diagram of a block structure according to an embodiment of the present invention.
Fig. 15 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Fig. 16 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
Fig. 17 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
Fig. 18 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
Cloud technology is a hosting technology that unifies a series of resources, such as hardware, software and networks, in a wide area network or a local area network to realize the computation, storage, processing and sharing of data.
Specifically, the embodiments of the present invention involve cloud computing, a computing model that distributes computing tasks across a resource pool formed by a large number of computers, allowing application systems to obtain computing power, storage space and information services on demand. The network that provides the resources is referred to as the "cloud". To users, resources in the "cloud" appear infinitely expandable, available at any time, usable on demand and paid for by usage. As the provider of the underlying capabilities of cloud computing, a cloud computing resource pool (that is, a cloud platform) is established, in which multiple types of virtual resources are deployed for external customers to use as needed. The cloud computing resource pool mainly includes computing devices (virtualized machines including operating systems), storage devices and network devices.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain the best results. AI is a comprehensive discipline covering a wide range of fields and involves both hardware-level and software-level techniques. AI software technology mainly includes computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is the science of studying how to make machines "see": using cameras and computers in place of human eyes to identify, track and measure targets, and further processing the resulting images so that they become more suitable for human observation or for transmission to instruments for detection. CV generally covers technologies such as image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping (SLAM), as well as common biometric technologies such as face recognition and fingerprint recognition.
Specifically, the scheme provided by the embodiment of the invention relates to the technologies of image processing, face recognition and the like in CV.
Specifically, the technical solutions provided by the embodiments of the present invention are illustrated by the following embodiments.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the description, claims and drawings of the present invention are used to distinguish between similar elements and are not necessarily intended to describe a particular sequential or chronological order. It should be understood that data so termed are interchangeable under appropriate circumstances, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described. Furthermore, the terms "comprises", "comprising" and "having", and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article or server.
Fig. 1 is a schematic diagram of an implementation environment of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the implementation environment may include at least a first terminal 01, an image fusion server 02, and a second terminal 03. The first terminal 01 and the second terminal 03 may each establish a direct or indirect wired or wireless connection with the image fusion server 02 to exchange data with it over the network. For example, the first terminal 01 may send the first image and the first template identification information to the image fusion server 02 over the network, the second terminal 03 may send the second image and the second template identification information to the image fusion server 02 over the network, and the image fusion server 02 may return the fused images to the first terminal 01 and the second terminal 03 over the network.
Specifically, the image fusion server 02 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
Specifically, the first terminal 01 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like.
Specifically, the second terminal 03 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like.
It should be noted that fig. 1 is only an example.
Fig. 2 is a flow chart of an image processing method provided by an embodiment of the present invention. The present specification provides the method operation steps as described in the embodiment or the flow chart, but more or fewer operation steps may be included based on conventional or non-inventive effort. The order of steps recited in the embodiments is merely one of many possible orders of execution and does not represent the only order. In practice, the system or server product may execute the steps sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the embodiments or the methods shown in the figures. Specifically, as shown in fig. 2, the method may include:
S101, a first terminal sends a first image and first template identification information to an image fusion server, where the first template identification information corresponds to a first image template selected by the first terminal from a multi-image fusion template, and the multi-image fusion template includes a plurality of image templates.
In the embodiment of the present invention, the first terminal user may log in to the display interface of the image fusion application by accessing a link or a two-dimensional code picture corresponding to the image fusion application. The display interface may include at least one multi-image fusion template, each multi-image fusion template may include at least two identical or different image templates, and the image templates include, but are not limited to, military-uniform images, child images, period-costume images, cartoon images, game-character images, and the like.
Fig. 3 is a schematic diagram of multi-image fusion templates and image templates in one scenario. As shown in fig. 3A, the display interface in this scenario includes four multi-image fusion templates, specifically a "My Fair Princess" image template, a "beautiful girl warrior" image template, a "funeral love family" image template, and a "classic avatar" image template, where each multi-image fusion template may include a left image template and a right image template that are identical or different.
It should be noted that fig. 3 is only an example; in different application scenarios, the display interface may include a different number of multi-image fusion templates, and each multi-image fusion template may include a different number of identical or different image templates. The embodiment of the present invention is not limited thereto.
In practical application, during the design of the image fusion application, template identification information, including but not limited to numbers, letters, symbols, and combinations thereof, is set in advance for each image template, and the mapping relationship between each image template and its template identification information is stored in a fused image identification information library in the server, so that the server can later find the corresponding image template from the template identification information selected by a user.
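The mapping described above can be sketched as a simple key-value store. The identifiers, file paths, and function names below are illustrative assumptions, since the patent does not specify the library's concrete structure.

```python
# Hypothetical fused image identification information library: maps template
# identification information (e.g. "001") to its image template record.
TEMPLATE_LIBRARY = {
    "001": {"position": "left", "file": "templates/left.png"},
    "002": {"position": "right", "file": "templates/right.png"},
}

def lookup_template(template_id):
    """Find the image template corresponding to user-selected identification info."""
    template = TEMPLATE_LIBRARY.get(template_id)
    if template is None:
        raise KeyError("unknown template identification information: " + template_id)
    return template
```

Selecting the left figure in fig. 3B would then resolve "001" to its template record in one lookup.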
In practical application, the first terminal user may select a multi-image fusion template of interest according to his or her own preference. The following description takes the first terminal user selecting the "My Fair Princess" image template as an example:
After the first terminal user clicks the "My Fair Princess" image template in fig. 3A, the display interface jumps to that template (as shown in fig. 3B), and the first terminal user may select either of the left and right human figures in the template as the first image template according to his or her preference. Assuming that the left figure is selected (its template identification information is "001"), the display interface jumps to an image upload interface, where the first terminal user may upload a face image already stored in the terminal or trigger the photographing control to capture a face image, so that the face image serves as the first image, and the first image and the first template identification information are sent to the image fusion server. It should be noted that when the first terminal user uploads a face image already stored in the terminal, the face image may be the first terminal user's own face image; of course, in some scenarios, to make the image fusion more entertaining, the uploaded face image may also be the face image of another user stored in the first terminal.
In a possible embodiment, after the first image template and the first image are selected, the first image may further be encoded, and the encoded first image and the first template identification information are sent to the image fusion server. For example, the first image may be converted to base64 through an HTML canvas element; canvas is part of the hypertext markup language and allows a scripting language to dynamically render bitmap images, and base64 is an encoding scheme that represents 8-bit bytes as printable characters for transmission.
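The same base64 round trip can be reproduced on the server side with standard-library calls. This is a sketch; the patent itself only specifies that the browser performs the canvas-to-base64 step.

```python
import base64

def encode_image(image_bytes):
    """Encode raw 8-bit image bytes as a printable base64 string for transmission."""
    return base64.b64encode(image_bytes).decode("ascii")

def decode_image(encoded):
    """Recover the original image bytes on the image fusion server."""
    return base64.b64decode(encoded.encode("ascii"))
```

Note that base64 only reformats the bytes for safe transport; it provides no confidentiality on its own.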
S103, the image fusion server fuses the first image and the first image template to obtain a first fused image.
S105, the image fusion server generates first record information corresponding to the first terminal and the first fused image.
S107, the image fusion server encrypts the first record information to obtain first fused image identification information.
In the embodiment of the present invention, after receiving the first image and the first template identification information, the image fusion server may obtain the first image template corresponding to the first template identification information from the fused image identification information library (for example, the first image template corresponding to "001"), and fuse the first image and the first image template to obtain the first fused image shown in fig. 5A; a schematic diagram of the first fused image generated by the image fusion server may be as shown in fig. 5B. In addition, the image fusion server may generate a record (i.e., first record information) of the first fused image and the user information and store the record in the fused image identification information library; to improve the security of information transmission, it may further encrypt the first record information to obtain first fused image identification information, which may be represented as itemid-A. The user information includes, but is not limited to, first terminal user attribute information, first terminal identification information, and the like.
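One way to realize steps S105-S107 is to serialize the record information and protect it with a keyed signature. The scheme below (HMAC over a base64 payload) is an illustrative assumption, since the patent does not name a concrete encryption algorithm; the key and field names are hypothetical.

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"fusion-server-secret"  # hypothetical server-side key

def make_itemid(record):
    """Turn record information into opaque fused image identification information."""
    payload = base64.urlsafe_b64encode(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).decode("ascii")
    tag = hmac.new(SECRET_KEY, payload.encode("ascii"), hashlib.sha256).hexdigest()[:16]
    return payload + "." + tag

def read_itemid(itemid):
    """Verify and decode an itemid back into the record information."""
    payload, tag = itemid.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, payload.encode("ascii"), hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("fused image identification information failed verification")
    return json.loads(base64.urlsafe_b64decode(payload.encode("ascii")))
```

Because the tag is keyed, a terminal that tampers with an itemid before sharing it will fail verification at the server.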
S109, the image fusion server sends the first fused image identification information and the first fused image to the first terminal.
In the embodiment of the present invention, after the image fusion is completed, the image fusion server may return itemid-A and the first fused image to the first terminal, and at this time the first terminal user may see the effect image shown in fig. 6.
S1011, the first terminal shares first co-shooting request information with a second terminal, where the first co-shooting request information carries the first fused image identification information.
In the embodiment of the present invention, after viewing the effect image shown in fig. 6 on the display interface, the first terminal user may share the first co-shooting request information with the second terminal user to carry out social interaction around the fused image, where the first co-shooting request information includes, but is not limited to, a link carrying itemid-A, a two-dimensional code picture carrying itemid-A, and the like.
In a feasible embodiment, if the first terminal user and the second terminal user are friends, the first terminal user may select the friend to be shared with from a friend list and send the first co-shooting request information to the friend directly, share it in a friend group, or share it in a social circle such as a circle of friends, so that the friend responds to the first co-shooting request to generate a same-frame co-shooting image with the first terminal user.
In another feasible embodiment, to improve the universality of the application, the first terminal user and the second terminal user need not be friends. If they are not friends, the first terminal may establish a short-range communication connection with the second terminal and send the first co-shooting request information to the second terminal user through that connection, so that the non-friend second terminal user responds to the first co-shooting request to generate a same-frame co-shooting image with the first terminal user. The short-range communication connection includes, but is not limited to, Near Field Communication (NFC), Bluetooth, ZigBee, and the like.
S1013, the second terminal acquires the first fused image based on the first fused image identification information.
In the embodiment of the present invention, the second terminal user may obtain itemid-A by clicking the link or scanning the two-dimensional code picture, and read from the image fusion server, through itemid-A, the fusion synthesis information included in the first record information, for example, the initiator of the link or two-dimensional code picture (i.e., the first terminal user), the first fused image, the first fused image address, the existing fusion participants, and the like.
S1015, the second terminal sends a second image and second template identification information to the image fusion server, where the second template identification information corresponds to a second image template selected by the second terminal from the multi-image fusion template.
In the embodiment of the present invention, after the second terminal user clicks the link or scans the two-dimensional code picture, an image template selection diagram as shown in fig. 7 may be displayed on the display interface of the second terminal, where the left figure of the effect diagram is the image obtained by fusing the first terminal user with the first image template, and the right figure is an empty template image. At this time, if the second terminal user wants to participate in the co-shooting request sent by the first terminal user, the second terminal user may click the right second image template (e.g., "002") in fig. 7 and upload a face image by selecting a face image already stored in the second terminal or by taking a photo, so that the second image and the second template identification information are sent to the image fusion server. It should be noted that when the second terminal user uploads a face image already stored in the terminal, the face image may be the second terminal user's own face image; of course, in some scenarios, to make the image fusion more entertaining, the uploaded face image may also be the face image of another user stored in the second terminal.
In practical application, as shown in fig. 7, because the first image template has already been fused with the face image of the first terminal user by the image fusion server, the first image template cannot be selected by the second user, so the second image template may be any image template in the multi-image fusion template other than the first image template.
In practical application, after receiving the first co-shooting request information sent by the first terminal, the second terminal may also choose not to respond to the co-shooting request and instead directly select a new multi-image fusion template, select a new image template from it, and send the new template identification information and a corresponding face image to the image fusion server, so as to initiate a new co-shooting request to the first terminal user or other terminal users.
S1017, the image fusion server fuses the second image and the second image template to obtain a second fused image.
S1019, the image fusion server generates second record information corresponding to the second terminal and the second fused image.
S10111, the image fusion server encrypts the second record information to obtain second fused image identification information.
In this embodiment of the present invention, after receiving the second image and the second template identification information, the image fusion server may obtain the second image template corresponding to the second template identification information from the fused image identification information library (for example, the second image template corresponding to "002"), and fuse the second image and the second image template to obtain the second fused image shown in fig. 8A.
In addition, the image fusion server may generate a record (i.e., second record information) of the second fused image and the second terminal user information and store the record in the fused image identification information library; to improve the security of information transmission, it may further encrypt the second record information to obtain second fused image identification information, which may be represented as itemid-B. The second terminal user information includes, but is not limited to, second terminal user attribute information, second terminal identification information, and the like.
S10113, the image fusion server sends the second fused image and the second fused image identification information to the second terminal.
S10115, the second terminal synthesizes the first fused image and the second fused image to obtain a target composite image.
In the embodiment of the present invention, after the image fusion is completed, the image fusion server may return the second fused image and itemid-B to the second terminal, and the second terminal synthesizes the first fused image and the second fused image to obtain the target composite image and displays it, as shown in fig. 8B.
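Step S10115's side-by-side composition can be sketched without any imaging library by treating each fused image as a grid of pixel values; a real implementation would use an image library, and the function name here is an assumption.

```python
def compose_side_by_side(left, right):
    """Place two equal-height pixel grids into one same-frame composite image."""
    if len(left) != len(right):
        raise ValueError("fused images must share the same height")
    # Concatenate each row of the left fused image with the matching right row.
    return [left_row + right_row for left_row, right_row in zip(left, right)]
```

For example, composing two 2x2 grids yields a single 2x4 target composite image.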
In a possible embodiment, if the multi-image fusion template includes only two image templates, after the second terminal synthesizes the first fused image and the second fused image to obtain the target composite image, the method may further include:
S10117, the second terminal shares image synthesis information with the first terminal, where the image synthesis information carries the second fused image identification information and the first fused image identification information.
S10119, the first terminal obtains the target composite image based on the second fused image identification information and the first fused image identification information, and displays the target composite image.
In practical application, if each multi-image fusion template includes only two image templates (for example, the templates shown in fig. 3B), then after the second terminal participates in the co-shooting request initiated by the first terminal, all the image templates in the multi-image fusion template have been used. The second terminal may then directly share with the first terminal the image synthesis information carrying the second fused image identification information and the first fused image identification information; the image synthesis information may be sent in the form of a link or a two-dimensional code picture, so that the first terminal user views the target composite image after clicking the link or recognizing the two-dimensional code. In this way, both the first terminal user and the second terminal user can view the same-frame fused co-shooting effect images of all the users with whom they have composed images, which increases social interaction between users and improves the entertainment value of image fusion and the user experience.
In practical application, after viewing the same-frame fused co-shooting effect image of the first terminal user and the second terminal user, the first terminal user may continue to select a new multi-image fusion template, select a new image template from it, and send the new template identification information and a corresponding face image to the image fusion server, so as to initiate a new co-shooting request to the second terminal user or other terminal users, thereby spreading the social fused images.
In another possible embodiment, as shown in fig. 4, if the multi-image fusion template includes at least three image templates, after the second terminal synthesizes the first fused image and the second fused image to obtain the target composite image, the method may further include:
S10117A, the second terminal shares second co-shooting request information with the first terminal or other terminals, where the second co-shooting request carries the second fused image identification information and the first fused image identification information;
S10119A, the first terminal or other terminals respond to the second co-shooting request information and generate a composite image corresponding to the second co-shooting request information; or, the first terminal or other terminals do not respond to the second co-shooting request information and instead send a new image and new template identification information to the image fusion server, where the new template identification information corresponds to a new image template selected by the first terminal or other terminals from a new multi-image fusion template.
In practical application, if the multi-image fusion template includes at least three image templates, then after the second terminal selects the second image template to participate in the co-shooting request initiated by the first terminal and the target composite image is obtained, other image templates remain available in the multi-image fusion template. At this time, the second terminal may share the second co-shooting request information with the first terminal to invite the first terminal to continue the co-shooting. Specifically:
The second co-shooting request information may be a link or a two-dimensional code picture. When the first terminal receives the second co-shooting request information shared by the second terminal, the link or two-dimensional code picture may be clicked. If the first terminal user does not want to participate in the co-shooting request initiated by the second terminal, the first terminal user may ignore the request and send a new image and new template identification information to the image fusion server, so as to seek a co-shot with other users. If the first terminal wants to participate in the co-shooting request initiated by the second terminal, any image template may be selected from the image templates other than the first and second image templates and a face image uploaded; the server performs image fusion, generates a new itemid, and generates a new composite image, which can be viewed by both the first terminal user and the second terminal user. The first terminal user may then continue to share links or two-dimensional code pictures carrying the new itemid with the second terminal user or other terminal users to generate new composite content, and so on, until all the image templates in the multi-image fusion template have been used. In this way, every user participating in the co-shooting requests can view the same-frame fused co-shooting effect images taken with the other users, achieving a diffusion effect for the social fused images, improving the entertainment value of image fusion and the interaction between users, and providing a better user experience.
In addition, the second terminal may also share the second co-shooting request information with terminals other than the first terminal; the specific process is similar to that described above and is not repeated here.
Fig. 9 is another schematic flow chart of the image processing method according to the embodiment of the present invention. Before the first terminal sends the first image and the first template identification information to the image fusion server, the method may further include:
Judging the address parameter accessed by the first terminal.
Specifically, judging the address parameter accessed by the first terminal may include:
The first terminal accesses target information and parses the target information.
If the target information does not carry target fused image identification information, the first terminal executes the step of sending the first image and the first template identification information to the image fusion server.
If the target information carries target fused image identification information, the first terminal acquires the target fusion information corresponding to the target fused image identification information from the fused image identification information library; the fused image identification information library stores fusion information, fused image identification information, template identification information, image templates, the mapping relationship between the fusion information and the fused image identification information, and the mapping relationship between the template identification information and the image templates.
The first terminal judges the type of the target information according to the target fusion information.
If the type of the target information is co-shooting request information, the first terminal responds to the co-shooting request information to generate a composite image corresponding to the target information; or, the first terminal does not respond to the co-shooting request information and executes the step of sending the first image and the first template identification information to the image fusion server.
If the type of the target information is non-co-shooting request information, the first terminal executes the step of sending the first image and the first template identification information to the image fusion server.
In practical application, before the first terminal sends the first image and the first template identification information to the image fusion server, it needs to log in to the display interface of the image fusion application. The first terminal may log in to the image fusion interface in various ways, such as clicking a link or two-dimensional code shared by another terminal, or directly opening the website of the image fusion application; however, the login method may lead the first terminal user to perform different operations.
Based on this, before the first terminal sends the first image and the first template identification information to the image fusion server, the login method of the first terminal needs to be judged, that is, the address parameter accessed by the first terminal needs to be judged. First, the parameters in the target information accessed by the user (such as a link, website, or two-dimensional code picture) may be parsed to determine whether they carry target fused image identification information itemid (such as itemid-A or itemid-B). If not, the target information accessed by the first terminal is the website of the image fusion application, and the step of sending the first image and the first template identification information to the image fusion server may be executed directly. If so, the target information accessed by the first terminal is co-shooting, image synthesis, or similar information sent by another terminal; according to the target fused image identification information, the corresponding record information may be looked up from the fused image identification information library in the image fusion server, and the information it contains analyzed, for example, who generated the target information and participated in the fusion synthesis, and whether it is co-shooting information or image synthesis information for a completed co-shot.
Whether the first user has already participated in the fusion synthesis may then be judged from the record information. If so, the user may choose not to respond to the co-shooting request, and the step of sending the first image and the first template identification information to the image fusion server is executed directly. If the first user has not participated in the fusion synthesis, it may further be judged whether the information is co-shooting information sent by a friend; if so, the user may choose to respond to the co-shooting request and join the friend's fusion template by selecting an image template from the remaining image templates and uploading a face image, so as to generate the composite image corresponding to the target information (the generation process is similar to that described above and is not repeated). If the information is a request sent by a non-friend (for example, image synthesis information for a completed co-shot), the step of sending the first image and the first template identification information to the image fusion server may be executed, so that a new co-shooting request can be initiated.
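The address-parameter judgment above amounts to parsing the accessed URL for an itemid query parameter and branching on it. The parameter name and the returned labels below are illustrative assumptions.

```python
from urllib.parse import parse_qs, urlparse

def extract_itemid(target_url):
    """Return the target fused image identification information, or None if absent."""
    params = parse_qs(urlparse(target_url).query)
    values = params.get("itemid")
    return values[0] if values else None

def classify_access(target_url, completed_itemids):
    """Decide how the first terminal should handle the accessed address."""
    itemid = extract_itemid(target_url)
    if itemid is None:
        return "direct-access"        # plain application website: upload first image
    if itemid in completed_itemids:
        return "image-synthesis"      # completed co-shot: just display the composite
    return "co-shooting-request"      # open invitation: respond or start a new fusion
```

A hypothetical shared link such as `https://fusion.example.com/app?itemid=itemid-A` would thus be classified as a co-shooting request unless itemid-A is already recorded as completed.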
Hereinafter, the image processing method provided by the embodiment of the present invention is described with the image fusion server as the execution subject. As shown in fig. 10, the method may include:
S201, receiving a first image and first template identification information sent by a first terminal, where the first template identification information corresponds to a first image template selected by the first terminal from a multi-image fusion template, and the multi-image fusion template includes a plurality of image templates.
S203, fusing the first image and the first image template to obtain a first fused image.
S205, generating first record information corresponding to the first terminal and the first fused image.
S207, encrypting the first record information to obtain first fused image identification information.
S209, sending the first fused image identification information and the first fused image to the first terminal, so that the first terminal shares first co-shooting request information with a second terminal, where the first co-shooting request information carries the first fused image identification information, and so that the second terminal acquires the first fused image based on the first fused image identification information.
S2011, receiving a second image and second template identification information sent by the second terminal, where the second template identification information corresponds to a second image template selected by the second terminal from the multi-image fusion template.
S2013, fusing the second image and the second image template to obtain a second fused image.
S2015, generating second record information corresponding to the second terminal and the second fused image.
S2017, encrypting the second record information to obtain second fused image identification information.
S2019, sending the second fused image and the second fused image identification information to the second terminal, so that the second terminal synthesizes the first fused image and the second fused image to obtain a target composite image and shares with the first terminal image synthesis information carrying the second fused image identification information and the first fused image identification information, so that the first terminal obtains the target composite image based on the image synthesis information and displays it.
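Steps S201-S209 can be condensed into a single server-side handler. The fusion routine and digest-based encryption below are stand-ins (the patent specifies neither), and all names are hypothetical.

```python
import hashlib
import json

def fuse(face_image, image_template):
    # Stand-in for the face-fusion routine, which the patent leaves unspecified.
    return "fused(" + face_image + "," + image_template + ")"

def encrypt_record(record):
    # Stand-in for the unspecified encryption of record information: an opaque
    # digest of the serialized record serves as the fused image identification info.
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return "itemid-" + digest[:8]

def handle_first_upload(face_image, template_id, terminal_id, templates, records):
    """Fuse the image (S203), record it (S205), encrypt (S207), return (S209)."""
    first_fused = fuse(face_image, templates[template_id])
    record = {"terminal": terminal_id, "fused_image": first_fused}
    itemid = encrypt_record(record)
    records[itemid] = record          # store in the identification information library
    return first_fused, itemid
```

The second terminal's upload (S2011-S2019) would reuse the same handler with its own template identification information.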
Hereinafter, the image processing method according to the embodiment of the present invention is described with the first terminal as the execution subject. As shown in fig. 11, the method may include:
S301, sending a first image and first template identification information to an image fusion server, where the first template identification information corresponds to a first image template selected by the local terminal from a multi-image fusion template including a plurality of image templates, so that the image fusion server fuses the first image and the first image template to obtain a first fused image, generates first record information corresponding to the first terminal and the first fused image, and encrypts the first record information to obtain first fused image identification information.
S303, receiving the first fused image identification information and the first fused image sent by the image fusion server.
S305, sharing first co-shooting request information with a second terminal, where the first co-shooting request information carries the first fused image identification information, so that the second terminal acquires the first fused image based on the first fused image identification information and sends a second image and second template identification information to the image fusion server, where the second template identification information corresponds to a second image template selected by the second terminal from the multi-image fusion template, so that the image fusion server fuses the second image and the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, encrypts the second record information to obtain second fused image identification information, and sends the second fused image and the second fused image identification information to the second terminal, so that the second terminal synthesizes the first fused image and the second fused image to obtain a target composite image.
S307, receiving image synthesis information shared by the second terminal, where the image synthesis information carries the second fused image identification information and the first fused image identification information.
S309, obtaining the target composite image based on the second fused image identification information and the first fused image identification information, and displaying the target composite image.
In this embodiment of the present invention, before sending the first image and the first template identification information to the image-blending server, the method may further include:
accessing target information and parsing the target information.
If the target information does not carry target fused-image identification information, the step of sending the first image and the first template identification information to the image fusion server is executed.
If the target information carries target fused-image identification information, target fusion information corresponding to the target fused-image identification information is acquired from the fused-image identification information base; the fused-image identification information base stores fusion information, fused-image identification information, template identification information, image templates, mapping relations between the fusion information and the fused-image identification information, and mapping relations between the template identification information and the image templates.
The type of the target information is determined according to the target fusion information.
If the type of the target information is co-shoot request information, the co-shoot request information is responded to, and a synthesized image corresponding to the target information is generated; alternatively, the co-shoot request information is not responded to, and the step of sending the first image and the first template identification information to the image fusion server is executed.
If the type of the target information is non-co-shoot request information, the step of sending the first image and the first template identification information to the image fusion server is executed.
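A minimal sketch of the branching just described, assuming hypothetical dict shapes and return values (the patent names no concrete data structures):

```python
# Hypothetical dispatch for accessed-and-parsed target information.
# Dict keys and return strings are illustrative assumptions.

def handle_target_info(target_info, id_base):
    fuse_id = target_info.get("fused_image_id")
    if fuse_id is None:
        # No fused-image identification info: take the normal upload path.
        return "send_first_image"
    fusion_info = id_base[fuse_id]            # look up target fusion information
    if fusion_info["type"] == "co_shoot_request":
        # Respond: generate the synthesized image for this co-shoot request.
        return "generate_synthesized_image"
    return "send_first_image"                 # non-co-shoot info: upload path

id_base = {"id-1": {"type": "co_shoot_request"}}
assert handle_target_info({}, id_base) == "send_first_image"
assert handle_target_info({"fused_image_id": "id-1"}, id_base) == "generate_synthesized_image"
```

Note that per the description, a terminal may also decline a co-shoot request and fall through to the upload path; the sketch shows only the default response branch.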
Hereinafter, the image processing method according to the embodiment of the present invention is described with a second terminal as an execution subject, and as shown in fig. 12, the method may include:
S401, receiving first co-shoot request information shared by a first terminal, wherein the first co-shoot request information carries first fused-image identification information; the first fused-image identification information is obtained by the image fusion server encrypting first record information generated based on the first terminal and a first fused image, the first fused image is obtained by the image fusion server fusing a first image and a first image template selected by the first terminal from a multi-image fusion template, the first image and the first template identification information corresponding to the first image template are sent to the image fusion server by the first terminal, and the multi-image fusion template comprises a plurality of image templates.
S403, acquiring the first fused image based on the first fused-image identification information.
S405, sending a second image and second template identification information to the image fusion server, wherein the second template identification information corresponds to a second image template selected by the local terminal from the multi-image fusion template, so that the image fusion server fuses the second image and the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, and encrypts the second record information to obtain second fused-image identification information.
S407, receiving the second fused image and the second fused-image identification information sent by the image fusion server.
S409, synthesizing the first fused image and the second fused image to obtain a target synthesized image.
S4011, sharing image synthesis information with the first terminal, wherein the image synthesis information carries the second fused-image identification information and the first fused-image identification information, so that the first terminal obtains the target synthesized image based on the second fused-image identification information and the first fused-image identification information, and displays the target synthesized image.
In this embodiment of the present invention, if the multi-image fusion template includes at least three image templates, after the first fused image and the second fused image are synthesized to obtain the target synthesized image, the method may further include:
sharing second co-shoot request information with the first terminal or another terminal, wherein the second co-shoot request information carries the second fused-image identification information and the first fused-image identification information, so that the first terminal or the other terminal responds to the second co-shoot request information and generates a synthesized image corresponding to the second co-shoot request information; or,
the first terminal or the other terminal does not respond to the second co-shoot request information and sends a new image and new template identification information to the image fusion server, wherein the new template identification information corresponds to a new image template selected by the first terminal or the other terminal from a new multi-image fusion template.
In one possible embodiment, the first fused-image identification information, the second fused-image identification information, the target fused-image identification information, the first record information, the second record information, and the like may also be stored in a blockchain system. Referring to fig. 13, fig. 13 is an optional structural diagram of the blockchain system according to the embodiment of the present invention. A peer-to-peer (P2P) network is formed among a plurality of nodes, and the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In the blockchain system, any machine such as a server or a terminal can join and become a node; a node comprises a hardware layer, a middle layer, an operating system layer, and an application layer.
Referring to the functions of each node in the blockchain system shown in fig. 13, the functions involved include:
1) Routing: a basic function of a node, used to support communication between nodes.
Besides the routing function, a node may also have the following functions:
2) Application: deployed in the blockchain to implement specific services according to actual service requirements. Data related to these functions is recorded to form record data, a digital signature is attached to the record data to indicate the source of the task data, and the record data is sent to the other nodes in the blockchain system, so that the other nodes add the record data to a temporary block when the source and integrity of the record data are verified successfully.
3) Blockchain: a series of blocks (Blocks) linked to one another in the chronological order of their generation. A new block cannot be removed once it has been added to the blockchain, and the blocks record the data submitted by nodes in the blockchain system.
Referring to fig. 14, fig. 14 is an alternative diagram of a block structure (Block Structure) according to an embodiment of the present invention. Each block stores the hash value of the transaction records in that block (the hash value of the block) as well as the hash value of the previous block, and the blocks are linked by these hash values to form a blockchain. A block may also include information such as a timestamp indicating when the block was generated. A blockchain (Blockchain) is essentially a decentralized database: a string of data blocks associated with one another using cryptography, where each data block contains information for verifying the validity (tamper-resistance) of its content and for generating the next block.
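The hash linkage of fig. 14 can be illustrated with a small sketch. SHA-256 and JSON serialization are assumptions here; the embodiment does not fix a hash algorithm or an encoding.

```python
# Sketch of the block structure in fig. 14: each block carries the hash of
# its predecessor, so the chain breaks if any earlier block is altered.
# SHA-256 over canonical JSON is an assumed choice, not from the patent.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def make_block(records, prev_hash, timestamp):
    return {"records": records, "prev_hash": prev_hash, "timestamp": timestamp}

def verify_chain(blocks) -> bool:
    # Every block must reference the hash of the block before it.
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(blocks, blocks[1:]))

genesis = make_block(["fused_image_id:abc"], prev_hash="0" * 64, timestamp=1)
second = make_block(["record_info:xyz"], prev_hash=block_hash(genesis), timestamp=2)
assert verify_chain([genesis, second])

# Tampering with the linkage is detected.
tampered = dict(second, prev_hash="0" * 64)
assert not verify_chain([genesis, tampered])
```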
As shown in fig. 15, an embodiment of the present invention provides an image processing apparatus, which may include:
the first receiving module 501 may be configured to receive a first image and first template identification information sent by a first terminal, where the first template identification information corresponds to a first image template selected by the first terminal from a multi-image fusion template, and the multi-image fusion template includes multiple image templates.
The first fusion module 503 may be configured to fuse the first image and the first image template to obtain a first fused image.
The first generating module 505 may be configured to generate first record information corresponding to the first terminal and the first fused image.
The first encryption module 507 may be configured to encrypt the first record information to obtain first fused-image identification information.
The first sending module 509 may be configured to send the first fused-image identification information and the first fused image to the first terminal, so that the first terminal shares first co-shoot request information with a second terminal, the first co-shoot request information carrying the first fused-image identification information, and so that the second terminal acquires the first fused image based on the first fused-image identification information.
The second receiving module 5011 may be configured to receive a second image and second template identification information sent by the second terminal, where the second template identification information corresponds to a second image template selected by the second terminal from the multi-image fusion template.
The second fusion module 5013 may be configured to fuse the second image and the second image template to obtain a second fused image.
The second generating module 5015 may be configured to generate second record information corresponding to the second terminal and the second fused image.
The second encryption module 5017 may be configured to encrypt the second record information to obtain second fused-image identification information.
The second sending module 5019 may be configured to send the second fused image and the second fused-image identification information to the second terminal, so that the second terminal synthesizes the first fused image and the second fused image to obtain a target synthesized image, and shares image synthesis information carrying the second fused-image identification information and the first fused-image identification information with the first terminal, so that the first terminal obtains the target synthesized image based on the image synthesis information and displays the target synthesized image.
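The "encrypt record information to obtain identification information" step performed by the encryption modules 507 and 5017 could, for illustration, be realized as a keyed digest over the record information. The HMAC-SHA256 construction, the key, and the field names below are all assumptions; the patent leaves the cryptographic primitive open, and a real implementation might instead use reversible encryption so the server can recover the record information directly from the identifier.

```python
# Hypothetical sketch: derive opaque fused-image identification information
# from record information (terminal + fused image). HMAC-SHA256 is an assumed
# primitive; the patent does not specify an algorithm.
import hashlib
import hmac
import json

SERVER_KEY = b"server-side-secret"  # hypothetical server-held key

def make_fused_image_id(terminal_id: str, fused_image_url: str) -> str:
    record_info = json.dumps(
        {"terminal": terminal_id, "fused_image": fused_image_url},
        sort_keys=True,
    )
    return hmac.new(SERVER_KEY, record_info.encode(), hashlib.sha256).hexdigest()

id_a = make_fused_image_id("terminal-1", "https://example.com/fused/1.png")
id_b = make_fused_image_id("terminal-2", "https://example.com/fused/1.png")
assert len(id_a) == 64      # hex digest; safe to embed in a link or QR code
assert id_a != id_b         # different terminals yield different identifiers
```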
As shown in fig. 16, an embodiment of the present invention provides an image processing apparatus, which may include:
A third sending module 601, configured to send a first image and first template identification information to an image fusion server, where the first template identification information corresponds to a first image template selected by the local terminal from a multi-image fusion template comprising a plurality of image templates, so that the image fusion server fuses the first image and the first image template to obtain a first fused image, generates first record information corresponding to the first terminal and the first fused image, and encrypts the first record information to obtain first fused-image identification information.
The third receiving module 603 may be configured to receive the first fused-image identification information and the first fused image sent by the image fusion server.
A first sharing module 605, configured to share first co-shoot request information with a second terminal, where the first co-shoot request information carries the first fused-image identification information, so that the second terminal obtains the first fused image based on the first fused-image identification information and sends a second image and second template identification information to the image fusion server, the second template identification information corresponding to a second image template selected by the second terminal from the multi-image fusion template, so that the image fusion server fuses the second image and the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, encrypts the second record information to obtain second fused-image identification information, and sends the second fused image and the second fused-image identification information to the second terminal, so that the second terminal synthesizes the first fused image and the second fused image to obtain a target synthesized image.
The image synthesis information receiving module 607 may be configured to receive image synthesis information shared by the second terminal, where the image synthesis information carries the second fused-image identification information and the first fused-image identification information.
The displaying module 609 may be configured to obtain the target synthesized image based on the second fused-image identification information and the first fused-image identification information, and display the target synthesized image.
As shown in fig. 17, an embodiment of the present invention provides an image processing apparatus, which may include:
A fourth receiving module 701, configured to receive first co-shoot request information shared by a first terminal, where the first co-shoot request information carries first fused-image identification information; the first fused-image identification information is obtained by the image fusion server encrypting first record information generated based on the first terminal and a first fused image, the first fused image is obtained by the image fusion server fusing a first image and a first image template selected by the first terminal from a multi-image fusion template, the first image and the first template identification information corresponding to the first image template are sent to the image fusion server by the first terminal, and the multi-image fusion template comprises a plurality of image templates.
The obtaining module 703 may be configured to obtain the first fused image based on the first fused-image identification information.
A fourth sending module 705, configured to send a second image and second template identification information to the image fusion server, where the second template identification information corresponds to a second image template selected by the local terminal from the multi-image fusion template, so that the image fusion server fuses the second image and the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, and encrypts the second record information to obtain second fused-image identification information.
A fifth receiving module 707, configured to receive the second fused image and the second fused-image identification information sent by the image fusion server.
The synthesizing module 709 may be configured to synthesize the first fused image and the second fused image to obtain a target synthesized image.
The second sharing module 7011 may be configured to share image synthesis information with the first terminal, where the image synthesis information carries the second fused-image identification information and the first fused-image identification information, so that the first terminal obtains the target synthesized image based on the second fused-image identification information and the first fused-image identification information, and displays the target synthesized image.
An embodiment of the present invention provides an image processing system, which may include: the system comprises a first terminal, a map-fusing server and a second terminal;
The first terminal may be configured to send a first image and first template identification information to the image fusion server, where the first template identification information corresponds to a first image template selected by the first terminal from a multi-image fusion template, the multi-image fusion template comprising a plurality of image templates; to receive a first fused image and first fused-image identification information sent by the image fusion server; to share first co-shoot request information with a second terminal, the first co-shoot request information carrying the first fused-image identification information; to receive image synthesis information shared by the second terminal, the image synthesis information carrying the second fused-image identification information and the first fused-image identification information; and to obtain the target synthesized image based on the second fused-image identification information and the first fused-image identification information and display the target synthesized image.
The image fusion server may be configured to fuse the first image and the first image template to obtain the first fused image; to generate first record information corresponding to the first terminal and the first fused image; to encrypt the first record information to obtain the first fused-image identification information; to send the first fused-image identification information and the first fused image to the first terminal; to receive a second image and second template identification information sent by the second terminal, the second template identification information corresponding to a second image template selected by the second terminal from the multi-image fusion template; to fuse the second image and the second image template to obtain a second fused image; to generate second record information corresponding to the second terminal and the second fused image; to encrypt the second record information to obtain second fused-image identification information; and to send the second fused image and the second fused-image identification information to the second terminal.
The second terminal may be configured to obtain the first fused image based on the first fused-image identification information; to send the second image and the second template identification information to the image fusion server; to synthesize the first fused image and the second fused image to obtain the target synthesized image; and to share the image synthesis information with the first terminal, the image synthesis information carrying the second fused-image identification information and the first fused-image identification information.
The embodiment of the present invention further provides an apparatus for image processing, where the apparatus includes a processor and a memory, where the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the image processing method provided in the above method embodiment.
The embodiment of the present invention further provides a storage medium, which may be disposed in a terminal to store at least one instruction or at least one program for implementing the image processing method in the method embodiments, where the at least one instruction or the at least one program is loaded and executed by a processor to implement the image processing method provided in the method embodiments.
With the image processing method, apparatus, device, and storage medium provided by the embodiments of the present invention, a user uploads a face image and image template information for face fusion; the generated fused image is stored in the fused-image identification information base, and a link with a unique itemid, or a two-dimensional-code image, is generated for social sharing. After a friend (or non-friend) user accesses the shared link or scans the two-dimensional code, that user obtains the sharer's fused-image information and joins the co-shoot request the sharer initiated, producing a same-frame co-shoot image among multiple people. This ultimately achieves a many-to-many same-frame fused-image co-shoot effect among social friends, together with a diffusion-propagation effect, thereby improving the interest and social interactivity of fused images and providing a better user experience.
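The shareable link with a unique itemid described above might be built and parsed as follows; the base URL, the `itemid` parameter name's exact wire format, and the function names are hypothetical illustrations.

```python
# Sketch of generating and consuming the share link carrying the fused-image
# identification information as a unique itemid. URL and names are assumed.
from urllib.parse import parse_qs, urlencode, urlparse

def build_share_link(fused_image_id: str,
                     base: str = "https://example.com/coshoot") -> str:
    return f"{base}?{urlencode({'itemid': fused_image_id})}"

def parse_share_link(link: str) -> str:
    # The second terminal extracts the identifier to fetch the first fused image.
    return parse_qs(urlparse(link).query)["itemid"][0]

link = build_share_link("a1b2c3")
assert link == "https://example.com/coshoot?itemid=a1b2c3"
assert parse_share_link(link) == "a1b2c3"
```

The same identifier string could equally be encoded into a two-dimensional code; only the transport differs, the lookup on the server side is identical.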
Optionally, in the embodiments of the present specification, the storage medium may be located in at least one of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The memory according to the embodiments of the present disclosure may be used to store software programs and modules, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required by functions, and the like, and the data storage area may store data created according to use of the apparatus, and the like. Further, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The image processing method provided by the embodiment of the present invention may be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Taking execution on a server as an example, fig. 18 is a hardware structure block diagram of a server for the image processing method according to the embodiment of the present invention. As shown in fig. 18, the server 800 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 810 (the processor 810 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 830 for storing data, and one or more storage media 820 (e.g., one or more mass storage devices) storing applications 823 or data 822. The memory 830 and the storage medium 820 may be transient or persistent storage. The program stored in the storage medium 820 may include one or more modules, and each module may include a series of instruction operations for the server. Further, the central processing unit 810 may be configured to communicate with the storage medium 820 to execute the series of instruction operations in the storage medium 820 on the server 800. The server 800 may also include one or more power supplies 860, one or more wired or wireless network interfaces 850, one or more input/output interfaces 840, and/or one or more operating systems 821, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
The input/output interface 840 may be used to receive or transmit data via a network. A specific example of the network may include a wireless network provided by a communication provider of the server 800. In one example, the input/output interface 840 includes a network interface controller (NIC) that may be connected to other network devices via a base station so as to communicate with the internet. In one example, the input/output interface 840 may be a radio frequency (RF) module configured to communicate with the internet wirelessly.
It will be understood by those skilled in the art that the structure shown in fig. 18 is merely an illustration and is not intended to limit the structure of the electronic device. For example, server 800 may also include more or fewer components than shown in FIG. 18, or have a different configuration than shown in FIG. 18.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device and server embodiments, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An image processing method, characterized in that the method comprises:
receiving a first image and first template identification information sent by a first terminal, wherein the first template identification information corresponds to a first image template selected by the first terminal from a multi-image fusion template, and the multi-image fusion template comprises a plurality of image templates;
fusing the first image and the first image template to obtain a first fused image;
generating first record information corresponding to the first terminal and the first fused image;
encrypting the first record information to obtain first fused-image identification information;
sending the first fused-image identification information and the first fused image to the first terminal, so that the first terminal shares first co-shoot request information with a second terminal, the first co-shoot request information carrying the first fused-image identification information, and so that the second terminal acquires the first fused image based on the first fused-image identification information;
receiving a second image and second template identification information sent by the second terminal, wherein the second template identification information corresponds to a second image template selected by the second terminal from the multi-image fusion template;
fusing the second image and the second image template to obtain a second fused image;
generating second record information corresponding to the second terminal and the second fused image;
encrypting the second record information to obtain second fused-image identification information;
and sending the second fused image and the second fused-image identification information to the second terminal, so that the second terminal synthesizes the first fused image and the second fused image to obtain a target synthesized image and shares image synthesis information carrying the second fused-image identification information and the first fused-image identification information with the first terminal, so that the first terminal obtains the target synthesized image based on the image synthesis information and displays the target synthesized image.
2. An image processing method, characterized in that the method comprises:
sending a first image and first template identification information to an image fusion server, wherein the first template identification information corresponds to a first image template selected by the local terminal from a multi-image fusion template comprising a plurality of image templates, so that the image fusion server fuses the first image and the first image template to obtain a first fused image, generates first record information corresponding to the first terminal and the first fused image, and encrypts the first record information to obtain first fused-image identification information;
receiving the first fused-image identification information and the first fused image sent by the image fusion server;
sharing first co-shoot request information with a second terminal, wherein the first co-shoot request information carries the first fused-image identification information, so that the second terminal obtains the first fused image based on the first fused-image identification information and sends a second image and second template identification information to the image fusion server, the second template identification information corresponding to a second image template selected by the second terminal from the multi-image fusion template, so that the image fusion server fuses the second image and the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, encrypts the second record information to obtain second fused-image identification information, and sends the second fused image and the second fused-image identification information to the second terminal, so that the second terminal synthesizes the first fused image and the second fused image to obtain a target synthesized image;
receiving image synthesis information shared by the second terminal, wherein the image synthesis information carries the second fused-image identification information and the first fused-image identification information;
and acquiring the target synthesized image based on the second fused-image identification information and the first fused-image identification information, and displaying the target synthesized image.
3. The method of claim 2, wherein before the sending of the first image and the first template identification information to the image fusion server, the method further comprises:
accessing target information and parsing the target information;
if the target information does not carry target fused-image identification information, executing the step of sending the first image and the first template identification information to the image fusion server;
if the target information carries target fused-image identification information, acquiring target fusion information corresponding to the target fused-image identification information from a fused-image identification information base, wherein the fused-image identification information base stores fusion information, fused-image identification information, template identification information, image templates, mapping relations between the fusion information and the fused-image identification information, and mapping relations between the template identification information and the image templates;
determining the type of the target information according to the target fusion information;
if the type of the target information is co-shoot request information, responding to the co-shoot request information to generate a synthesized image corresponding to the target information, or not responding to the co-shoot request information and executing the step of sending the first image and the first template identification information to the image fusion server;
and if the type of the target information is non-co-shoot request information, executing the step of sending the first image and the first template identification information to the image fusion server.
4. An image processing method, characterized in that the method comprises:
receiving first co-shoot request information shared by a first terminal, wherein the first co-shoot request information carries first fused-image identification information; the first fused-image identification information is obtained by an image fusion server encrypting first record information generated based on the first terminal and a first fused image; the first fused image is obtained by the image fusion server fusing a first image with a first image template selected by the first terminal from a multi-image fusion template; the first image and first template identification information corresponding to the first image template are sent to the image fusion server by the first terminal; and the multi-image fusion template comprises a plurality of image templates;
acquiring the first fused image based on the first fused-image identification information;
sending a second image and second template identification information to the image fusion server, wherein the second template identification information corresponds to a second image template selected by the local terminal from the multi-image fusion template, so that the image fusion server fuses the second image with the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, and encrypts the second record information to obtain second fused-image identification information;
receiving the second fused image and the second fused-image identification information sent by the image fusion server;
synthesizing the first fused image and the second fused image to obtain a target composite image;
sharing image synthesis information with the first terminal, wherein the image synthesis information carries the second fused-image identification information and the first fused-image identification information, so that the first terminal obtains the target composite image based on the second fused-image identification information and the first fused-image identification information and displays the target composite image.
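The method of claim 4 can be sketched end to end with the image fusion server stubbed out. Everything here is an illustrative assumption: the function names, the string placeholders for images, and the use of an HMAC tag to stand in for the unspecified "encryption" of the record information are not the patent's scheme.

```python
# Hedged sketch of the claim-4 flow with a stubbed image fusion server.
# The HMAC-based derivation of the fused-image identification information
# is an assumption; the claims only say the record info is "encrypted".
import hashlib
import hmac
import json

SECRET = b"server-side-key"  # assumed server-held secret


def make_fused_image_id(terminal_id, fused_image):
    # record information: which terminal produced which fused image
    record = json.dumps({"terminal": terminal_id, "image": fused_image})
    return hmac.new(SECRET, record.encode(), hashlib.sha256).hexdigest()


def server_fuse(terminal_id, image, template):
    """Stub for the server: fuse image with template, derive the id."""
    fused = f"{image}+{template}"  # placeholder for real image fusion
    return fused, make_fused_image_id(terminal_id, fused)


# first terminal fuses its image; second terminal does the same, then
# synthesizes both fused images into the target composite image
first_fused, first_id = server_fuse("T1", "face1.png", "template_A")
second_fused, second_id = server_fuse("T2", "face2.png", "template_B")
target_composite = (first_fused, second_fused)  # synthesis placeholder
```

Because the identifier is derived server-side from the record information, a terminal can only retrieve a fused image by presenting an identifier the server itself issued, which is the point of the encryption step in the claim.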
5. The method according to claim 4, wherein, if the multi-image fusion template comprises at least three image templates, after synthesizing the first fused image and the second fused image to obtain the target composite image, the method further comprises:
sharing second co-shoot request information with the first terminal or another terminal, wherein the second co-shoot request information carries the second fused-image identification information and the first fused-image identification information, so that the first terminal or the other terminal responds to the second co-shoot request information and generates a composite image corresponding to the second co-shoot request information; or,
does not respond to the second co-shoot request information and sends a new image and new template identification information to the image fusion server, wherein the new template identification information corresponds to a new image template selected by the first terminal or the other terminal from the multi-image fusion template.
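Claim 5's extension to three or more image templates amounts to relaying the co-shoot request until every template slot is filled. The sketch below is an assumption about how that relay could be structured; the function and field names are illustrative, not from the patent.

```python
# Illustrative relay step for a multi-image fusion template with N slots:
# each terminal either completes the composite (all slots filled) or
# forwards a co-shoot request carrying every fused-image id so far.
def relay_co_shoot(fused_ids, fused_images, templates_total):
    if len(fused_images) >= templates_total:
        # every image template has a fused image: finish the composite
        return {"done": True, "composite": tuple(fused_images)}
    # otherwise forward a request carrying the ids gathered so far
    return {"done": False, "request_ids": list(fused_ids)}
```

Each hop adds one fused image, so a template with N slots terminates after N terminals have contributed.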
6. An image processing apparatus, characterized in that the apparatus comprises:
a first receiving module, configured to receive a first image and first template identification information sent by a first terminal, wherein the first template identification information corresponds to a first image template selected by the first terminal from a multi-image fusion template, and the multi-image fusion template comprises a plurality of image templates;
a first fusion module, configured to fuse the first image with the first image template to obtain a first fused image;
a first generating module, configured to generate first record information corresponding to the first terminal and the first fused image;
a first encryption module, configured to encrypt the first record information to obtain first fused-image identification information;
a first sending module, configured to send the first fused-image identification information and the first fused image to the first terminal, so that the first terminal shares first co-shoot request information with a second terminal, wherein the first co-shoot request information carries the first fused-image identification information, enabling the second terminal to acquire the first fused image based on the first fused-image identification information;
a second receiving module, configured to receive a second image and second template identification information sent by the second terminal, wherein the second template identification information corresponds to a second image template selected by the second terminal from the multi-image fusion template;
a second fusion module, configured to fuse the second image with the second image template to obtain a second fused image;
a second generating module, configured to generate second record information corresponding to the second terminal and the second fused image;
a second encryption module, configured to encrypt the second record information to obtain second fused-image identification information;
a second sending module, configured to send the second fused image and the second fused-image identification information to the second terminal, so that the second terminal synthesizes the first fused image and the second fused image to obtain a target composite image and shares image synthesis information carrying the second fused-image identification information and the first fused-image identification information with the first terminal, so that the first terminal obtains the target composite image based on the image synthesis information and displays the target composite image.
7. An image processing apparatus, characterized in that the apparatus comprises:
a third sending module, configured to send a first image and first template identification information to an image fusion server, wherein the first template identification information corresponds to a first image template selected by the local terminal from a multi-image fusion template comprising a plurality of image templates, so that the image fusion server fuses the first image with the first image template to obtain a first fused image, generates first record information corresponding to the local terminal and the first fused image, and encrypts the first record information to obtain first fused-image identification information;
a third receiving module, configured to receive the first fused-image identification information and the first fused image sent by the image fusion server;
a first sharing module, configured to share first co-shoot request information with a second terminal, wherein the first co-shoot request information carries the first fused-image identification information, so that the second terminal acquires the first fused image based on the first fused-image identification information and sends a second image and second template identification information to the image fusion server, wherein the second template identification information corresponds to a second image template selected by the second terminal from the multi-image fusion template, so that the image fusion server fuses the second image with the second image template to obtain a second fused image, generates second record information corresponding to the second terminal and the second fused image, encrypts the second record information to obtain second fused-image identification information, and sends the second fused image and the second fused-image identification information to the second terminal, enabling the second terminal to synthesize the first fused image and the second fused image to obtain a target composite image;
an image synthesis information receiving module, configured to receive image synthesis information shared by the second terminal, wherein the image synthesis information carries the second fused-image identification information and the first fused-image identification information;
a display module, configured to acquire the target composite image based on the second fused-image identification information and the first fused-image identification information and display the target composite image.
8. An image processing apparatus, characterized in that the apparatus comprises:
a fourth receiving module, configured to receive first co-shoot request information shared by a first terminal, wherein the first co-shoot request information carries first fused-image identification information; the first fused-image identification information is obtained by an image fusion server encrypting first record information generated based on the first terminal and a first fused image; the first fused image is obtained by the image fusion server fusing a first image with a first image template selected by the first terminal from a multi-image fusion template; the first image and first template identification information corresponding to the first image template are sent to the image fusion server by the first terminal; and the multi-image fusion template comprises a plurality of image templates;
an acquisition module, configured to acquire the first fused image based on the first fused-image identification information;
a fourth sending module, configured to send a second image and second template identification information to the image fusion server, wherein the second template identification information corresponds to a second image template selected by the local terminal from the multi-image fusion template, so that the image fusion server fuses the second image with the second image template to obtain a second fused image, generates second record information corresponding to the local terminal and the second fused image, and encrypts the second record information to obtain second fused-image identification information;
a fifth receiving module, configured to receive the second fused image and the second fused-image identification information sent by the image fusion server;
a synthesis module, configured to synthesize the first fused image and the second fused image to obtain a target composite image;
a second sharing module, configured to share image synthesis information with the first terminal, wherein the image synthesis information carries the second fused-image identification information and the first fused-image identification information, so that the first terminal obtains the target composite image based on the second fused-image identification information and the first fused-image identification information and displays the target composite image.
9. An electronic device, characterized in that the electronic device comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 5.
10. A computer-readable storage medium, in which at least one instruction or at least one program is stored, the at least one instruction or the at least one program being loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 5.
CN201911299168.6A 2019-12-17 2019-12-17 Image processing method, device, equipment and storage medium Active CN110992256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911299168.6A CN110992256B (en) 2019-12-17 2019-12-17 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911299168.6A CN110992256B (en) 2019-12-17 2019-12-17 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110992256A true CN110992256A (en) 2020-04-10
CN110992256B CN110992256B (en) 2021-09-14

Family

ID=70094372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911299168.6A Active CN110992256B (en) 2019-12-17 2019-12-17 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110992256B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580930A (en) * 2013-10-28 2015-04-29 Tencent Technology (Shenzhen) Co., Ltd. Group photo taking method and system
CN104680480A (en) * 2013-11-28 2015-06-03 Tencent Technology (Shanghai) Co., Ltd. Image processing method and device
CN105608715A (en) * 2015-12-17 2016-05-25 Guangzhou Huaduo Network Technology Co., Ltd. Online group shot method and system
CN106355551A (en) * 2016-08-26 2017-01-25 Beijing Kingsoft Internet Security Software Co., Ltd. Jigsaw processing method and device, electronic equipment and server
CN107404617A (en) * 2017-07-21 2017-11-28 Nubia Technology Co., Ltd. A kind of image pickup method and terminal, computer-readable storage medium
JP2017225028A (en) * 2016-06-16 2017-12-21 Dai Nippon Printing Co., Ltd. Printed matter generating apparatus and image data providing system
KR20190119212A (en) * 2018-03-30 2019-10-22 Kyungil University Industry-Academic Cooperation Foundation System for performing virtual fitting using artificial neural network, method thereof and computer recordable medium storing program to perform the method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GINTAUTAS PALUBINSKAS: "Framework for multi-sensor data fusion using template based matching", IEEE Xplore *
WEI Lu: "Research on Face Replacement Technology Based on a 3D Morphable Model", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881438A (en) * 2020-08-14 2020-11-03 支付宝(杭州)信息技术有限公司 Method and device for carrying out biological feature recognition based on privacy protection and electronic equipment
CN111881438B (en) * 2020-08-14 2024-02-02 支付宝(杭州)信息技术有限公司 Method and device for carrying out biological feature recognition based on privacy protection and electronic equipment
CN112004034A (en) * 2020-09-04 2020-11-27 北京字节跳动网络技术有限公司 Method and device for close photographing, electronic equipment and computer readable storage medium
WO2022048651A1 (en) * 2020-09-04 2022-03-10 北京字节跳动网络技术有限公司 Cooperative photographing method and apparatus, electronic device, and computer-readable storage medium
WO2022213798A1 (en) * 2021-04-08 2022-10-13 北京字跳网络技术有限公司 Image processing method and apparatus, and electronic device and storage medium
WO2023011318A1 (en) * 2021-08-04 2023-02-09 北京字跳网络技术有限公司 Media file processing method and apparatus, device, readable storage medium, and product

Also Published As

Publication number Publication date
CN110992256B (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN110992256B (en) Image processing method, device, equipment and storage medium
CN110809175B (en) Video recommendation method and device
US10075399B2 (en) Method and system for sharing media content between several users
CN112087652A (en) Video production method, video sharing device, electronic equipment and storage medium
WO2023045710A1 (en) Multimedia display and matching methods and apparatuses, device and medium
CN111476871A (en) Method and apparatus for generating video
JP7247587B2 (en) Image style conversion device, image style conversion method, and program
US11182639B2 (en) Systems and methods for provisioning content
CN111046198B (en) Information processing method, device, equipment and storage medium
CN111147766A (en) Special effect video synthesis method and device, computer equipment and storage medium
CN115222862A (en) Virtual human clothing generation method, device, equipment, medium and program product
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
CN112990370B (en) Image data processing method and device, storage medium and electronic equipment
US20220092332A1 (en) Systems and methods for provisioning content
CN114253436B (en) Page display method, device and storage medium
WO2023034721A1 (en) Per participant end-to-end encrypted metadata
CN112925595A (en) Resource distribution method and device, electronic equipment and storage medium
CN115242980B (en) Video generation method and device, video playing method and device and storage medium
CN110213061B (en) Synchronous communication method, synchronous communication device, synchronous communication apparatus, and medium
Adeniyi et al. Red: a real-time datalogging toolkit for remote experiments
EP3389281B1 (en) Systems and methods for provisioning content
KR102174569B1 (en) Method for providing augmented reality based information
CN117171438A (en) Digital image recommendation method, device, computer equipment and storage medium
EP3389282A1 (en) Systems and methods for provisioning content
CN117193607A (en) Virtual object processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40022198

Country of ref document: HK

GR01 Patent grant