CN114640805A - Co-shooting method, electronic device, and server - Google Patents

Co-shooting method, electronic device, and server

Info

Publication number
CN114640805A
Authority
CN
China
Prior art keywords
close
data
electronic device
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110312721.6A
Other languages
Chinese (zh)
Inventor
刘邦洪
张璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN114640805A
Legal status: Pending (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265: Mixing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/08: Network architectures or network communication protocols for network security, for authentication of entities

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application provides a co-shooting method, an electronic device, and a server, and relates to the field of terminal software. In the method, a first electronic device uploads data to be co-shot to a server, the data to be co-shot being video stream data of the first electronic device; the first electronic device receives first co-shot data from the server and displays it, the first co-shot data comprising a background layer, a portrait layer from the first electronic device, and a portrait layer from a second electronic device; in response to a position adjustment operation on a portrait layer, the first electronic device obtains position adjustment information and sends it to the server; and the first electronic device receives second co-shot data generated by the server according to the position adjustment information and displays it. The method makes adjusting a user's position during co-shooting more flexible, reduces the performance requirements on the electronic device, and lowers device cost.

Description

Co-shooting method, electronic device, and server
Technical Field
The present application claims priority to Chinese patent application No. 202011384457.9, entitled "A target object position adjustment method, electronic device, and server", filed with the China National Intellectual Property Administration on November 30, 2020, which is incorporated herein by reference in its entirety.
The application relates to the field of terminal technologies, and in particular to a co-shooting method, an electronic device, and a server.
Background
As mobile phones have become smarter and their cameras have gained higher pixel counts, phone photography can satisfy most of a user's everyday shooting needs. However, in some scenarios where a group photo is desired, members who cannot be present for reasons of time or location cannot take part in the group photo.
A common workaround is to embed a photo of the absent person into the group photo using image-editing software such as Photoshop (PS), so that relatives or friends who could not attend still appear in the group photo. However, because no position is reserved for them at shooting time, the later PS compositing is difficult and the user experience is poor.
Disclosure of Invention
The application provides a co-shooting method, an electronic device, and a server, which improve the quality of composited co-shot image data and the experience of co-shooting users.
In a first aspect, an embodiment of the present application provides a co-shooting method applied to a first electronic device, where the first electronic device includes at least one camera. The method includes: the first electronic device establishes a co-shooting service with a second electronic device; the first electronic device uploads data to be co-shot to a server, the data to be co-shot being video stream data of the first electronic device; the first electronic device receives first co-shot data from the server and displays it, the first co-shot data comprising a background layer, a portrait layer from the first electronic device, and a portrait layer from the second electronic device; in response to a position adjustment operation on a portrait layer, the first electronic device obtains position adjustment information and sends it to the server; and the first electronic device receives second co-shot data generated by the server according to the position adjustment information and displays it.
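The first-aspect message sequence can be sketched as a toy exchange with an in-memory server. The `ToyServer` class, `CoShotData` structure, and all field names below are illustrative assumptions; the patent does not define any concrete API.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class CoShotData:
    background: str                        # background layer
    portraits: Dict[str, Tuple[int, int]]  # device id -> layer position

class ToyServer:
    def __init__(self):
        self.streams: Dict[str, str] = {}

    def upload(self, device_id: str, stream: str) -> None:
        # Each device uploads its video stream as "data to be co-shot".
        self.streams[device_id] = stream

    def compose(self) -> CoShotData:
        # First co-shot data: one portrait layer per device at a
        # default position over a shared background layer.
        return CoShotData("beach", {d: (0, 0) for d in self.streams})

    def adjust(self, data: CoShotData, device_id: str,
               pos: Tuple[int, int]) -> CoShotData:
        # Second co-shot data: identical layers with one portrait moved
        # according to the received position adjustment information.
        moved = dict(data.portraits)
        moved[device_id] = pos
        return CoShotData(data.background, moved)

server = ToyServer()
server.upload("phone_A", "stream_A")   # first electronic device
server.upload("phone_B", "stream_B")   # second electronic device
first = server.compose()               # displayed on both devices
second = server.adjust(first, "phone_A", (40, 10))  # user drags a layer
```

The key point the sketch illustrates is that the client never composites anything itself: it only uploads a stream and sends adjustment messages, which is what lets the method lower the performance requirements on the device.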
It should be noted that, in the present application, the first electronic device and the second electronic device establishing a co-shooting service may be understood as follows: user 1 of the first electronic device sends a co-shooting request to user 2 of the second electronic device, and once user 2 accepts the request, the co-shooting service is established between the two devices. The adjusted position of a portrait layer is directly reflected in the second co-shot data, so no position needs to be reserved at shooting time, and the difficulty of later embedding caused by an unreserved position does not arise. The co-shooting method is therefore more flexible and offers a better interactive experience.
In one possible design, the portrait layers include a portrait layer of the co-shooting initiator and a portrait layer of the co-shooting invitee, where the first electronic device belongs to the co-shooting initiator and the second electronic device belongs to the co-shooting invitee. The first electronic device may be triggered to adjust the position of the initiator's portrait layer, and to obtain position adjustment information, in response to the initiator moving or the first electronic device being moved; or it may be triggered to adjust the position of the invitee's portrait layer, and to obtain position adjustment information, in response to the invitee moving or the second electronic device being moved.
It should be noted that a position can be adjusted either by the user physically moving or by moving the corresponding electronic device. For example, if user 1 participates in the co-shooting holding a mobile phone, the user may either move himself or herself or move the phone to adjust the position. Supporting both modes increases the flexibility of adjustment and better meets users' needs.
In one possible design, the position adjustment information includes: a scaling ratio, a front/back order, a left/right position, and an occluded part.
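One possible serialization of the position adjustment information listed above is a small structure like the following; every field name here is an assumption made for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PositionAdjustment:
    layer_id: str                # which portrait layer to move
    scale: float = 1.0           # scaling ratio of the layer
    depth: int = 0               # front/back order (higher = in front)
    x_offset: int = 0            # left/right position shift in pixels
    occluded_region: tuple = ()  # (x, y, w, h) of the occluded part, if any

# The initiator shrinks the invitee's layer, brings it forward,
# and shifts it 30 px to the left:
adj = PositionAdjustment("portrait_B", scale=0.8, depth=1, x_offset=-30)
```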
In one possible design, the first co-shot data may be determined as follows: in response to the co-shooting initiator selecting a friend in a friend list, the first electronic device sends a co-shooting request to the server, instructing the server to forward the co-shooting request to the second electronic device and return response information; if the response information indicates that the co-shooting invitee accepts the request, the first electronic device uploads the initiator's data to be co-shot to the server and instructs the server to generate the first co-shot data from the initiator's and the invitee's data to be co-shot.
In one possible design, the co-shooting request includes one or more of the following: co-shooting a photo, co-shooting a video, and co-shooting a live stream.
It should be noted that the co-shooting request of the application applies not only to co-shot photos but also to co-shot videos, co-shot live streams, and the like, which broadens the application's scope, better meets users' needs, and improves the user experience.
In one possible design, the first electronic device may, in response to a co-shooting template selection operation by the co-shooting initiator, send a co-shooting template construction request to the server, instructing the server to generate a co-shooting template according to the request and return the construction result; if the construction result indicates that the template was built successfully, the first electronic device presents the friend list for the initiator to select from.
In one possible design, the co-shooting template selection operation includes selecting the number of co-shooting users and selecting the co-shooting background, where the co-shooting background includes a self-portrait background of the co-shooting initiator and a self-portrait background of the co-shooting invitee.
In one possible design, after receiving the second co-shot data from the server, the first electronic device sends a co-shooting confirmation request to the server in response to a co-shooting confirmation operation triggered by the co-shooting initiator.
In one possible design, the co-shooting confirmation operation is triggered by one or more of the following: the initiator clicking a co-shooting confirmation button, the initiator and the invitee holding a preset facial expression, and the initiator and the invitee holding a pose for a preset duration.
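The "hold a pose for a preset duration" trigger can be sketched as a run-length check over per-frame pose labels: confirmation fires once the detected pose has stayed unchanged for the required number of frames. The frame rate, the pose labels, and the function name are illustrative assumptions.

```python
def pose_hold_trigger(frames, hold_seconds: float, fps: float = 30.0):
    """frames: iterable of pose labels, one per video frame.

    Returns the frame index at which the confirmation triggers,
    or None if no pose is held long enough.
    """
    needed = int(hold_seconds * fps)
    run, last = 0, None
    for i, pose in enumerate(frames):
        # Extend the run if the pose is unchanged, otherwise restart it.
        run = run + 1 if pose == last else 1
        last = pose
        if run >= needed:
            return i
    return None

# At 30 fps with a 1-second hold, 5 frames of "wave" followed by
# 40 frames of "peace_sign" trigger partway through the second pose.
frames = ["wave"] * 5 + ["peace_sign"] * 40
```

A real implementation would compare pose keypoints within a tolerance rather than exact labels, but the debounce structure would be the same.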
In a second aspect, an embodiment of the present application provides a co-shooting method applied to a server, including: receiving data to be co-shot from a first electronic device and from a second electronic device, respectively, where the first and second electronic devices have established a co-shooting service, each includes at least one camera, and each device's data to be co-shot is its video stream data; compositing the two devices' data to be co-shot into first co-shot data, which comprises a background layer, a portrait layer from the first electronic device, and a portrait layer from the second electronic device; sending the first co-shot data to both devices; receiving position adjustment information, determined in response to a position adjustment operation on a portrait layer, and adjusting the first co-shot data accordingly to obtain second co-shot data; and sending the second co-shot data to both devices.
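The server-side layer composition can be sketched with a toy pixel model in which each layer is a dict of (x, y) coordinates to pixel values and portrait layers are pasted over the background in back-to-front order. Real compositing would use segmentation masks and alpha blending; this only illustrates the layer model described above.

```python
def compose(background, portrait_layers):
    """portrait_layers: list of (offset, layer) pairs, back to front."""
    canvas = dict(background)
    for (ox, oy), layer in portrait_layers:
        for (x, y), pixel in layer.items():
            # A portrait pixel occludes whatever is already on the canvas.
            canvas[(x + ox, y + oy)] = pixel
    return canvas

# A 4x4 background with one single-pixel portrait from each device:
bg = {(x, y): "sky" for x in range(4) for y in range(4)}
portrait_a = {(0, 0): "A"}   # from the first electronic device
portrait_b = {(0, 0): "B"}   # from the second electronic device
frame = compose(bg, [((0, 0), portrait_a), ((1, 0), portrait_b)])
```

Applying position adjustment information then amounts to re-running `compose` with a changed offset or layer order, which is exactly how the second co-shot data is derived from the first.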
It should be noted that the adjusted position of the portrait layer is directly reflected in the second co-shot data, so no position needs to be reserved at shooting time, and the difficulty of later embedding caused by an unreserved position does not arise. The co-shooting method is therefore more flexible and offers a better interactive experience.
In one possible design, the server may separate the portrait layers from the background layer in the first co-shot data, and adjust the position of a portrait layer according to the position adjustment information to obtain the second co-shot data.
In one possible design, the portrait layers include a portrait layer of the co-shooting initiator and a portrait layer of the co-shooting invitee, where the first electronic device belongs to the initiator and the second electronic device belongs to the invitee. The server may acquire the device parameters of the first electronic device and of the second electronic device, adapt the second co-shot data into image data matching the first device's parameters and send it to the first device, and adapt the second co-shot data into image data matching the second device's parameters and send it to the second device.
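Adapting the second co-shot data to each device's parameters might, for example, scale the composited frame to the device's screen resolution while preserving aspect ratio (letterboxing). The function below is an assumed sketch of that step, not the patent's algorithm.

```python
def fit_to_device(src_w: int, src_h: int, dev_w: int, dev_h: int):
    """Return (out_w, out_h) of the composited frame after scaling it
    to fit the device screen without distorting the aspect ratio."""
    scale = min(dev_w / src_w, dev_h / src_h)
    return round(src_w * scale), round(src_h * scale)

# A 1920x1080 composite sent to a 1080x2340 portrait-mode phone:
size_for_phone = fit_to_device(1920, 1080, 1080, 2340)
```

The server would run this once per participant, so each device receives image data sized for its own screen rather than a one-size-fits-all frame.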
In one possible design, after receiving a co-shooting request from the first electronic device, the server may forward it to the second electronic device; the co-shooting request is triggered by the co-shooting initiator selecting a friend in the friend list. The server then receives response information from the second electronic device and sends it to the first electronic device.
In one possible design, the co-shooting request includes one or more of the following: co-shooting a photo, co-shooting a video, and co-shooting a live stream.
In one possible design, the server may receive a co-shooting template construction request from the first electronic device and determine the construction result of the co-shooting template, where the construction request is triggered by the first electronic device in response to a co-shooting template selection operation by the co-shooting initiator; the server returns the construction result to the first electronic device.
In one possible design, the co-shooting template selection operation includes selecting the number of co-shooting users and selecting the co-shooting background, where the co-shooting background includes a self-portrait background of the co-shooting initiator and a self-portrait background of the co-shooting invitee.
In one possible design, the server may receive a co-shooting confirmation request from the first electronic device and generate co-shooting confirmation data; the confirmation request is triggered by the first electronic device, in response to the initiator's co-shooting confirmation operation, after the first electronic device receives the second co-shot data from the server.
In one possible design, the co-shooting confirmation operation is triggered by one or more of the following: the initiator clicking a co-shooting confirmation button, the initiator and the invitee holding a preset facial expression, and the initiator and the invitee holding a pose for a preset duration.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and one or more memories, where the one or more memories store one or more computer programs comprising instructions which, when executed by the one or more processors, cause the electronic device, as a first electronic device, to perform the following steps:
establishing a co-shooting service with a second electronic device; uploading data to be co-shot to a server, the data to be co-shot being video stream data of the first electronic device; receiving first co-shot data from the server and displaying it, the first co-shot data comprising a background layer, a portrait layer from the first electronic device, and a portrait layer from the second electronic device; obtaining position adjustment information in response to a position adjustment operation on a portrait layer; sending the position adjustment information to the server; and receiving second co-shot data generated by the server according to the position adjustment information and displaying it.
It should be noted that the adjusted position of the portrait layer is directly reflected in the second co-shot data, so no position needs to be reserved at shooting time, and the difficulty of later embedding caused by an unreserved position does not arise. The co-shooting method is therefore more flexible and offers a better interactive experience.
In one possible design, the portrait layers include a portrait layer of the co-shooting initiator and a portrait layer of the co-shooting invitee, where the first electronic device belongs to the co-shooting initiator and the second electronic device belongs to the co-shooting invitee. The first electronic device may be triggered to adjust the position of the initiator's portrait layer, and to obtain position adjustment information, in response to the initiator moving or the first electronic device being moved; or it may be triggered to adjust the position of the invitee's portrait layer, and to obtain position adjustment information, in response to the invitee moving or the second electronic device being moved.
It should be noted that a position can be adjusted either by the user physically moving or by moving the corresponding electronic device. For example, if user 1 participates in the co-shooting holding a mobile phone, the user may either move himself or herself or move the phone to adjust the position. Supporting both modes increases the flexibility of adjustment and better meets users' needs.
In one possible design, the position adjustment information includes: a scaling ratio, a front/back order, a left/right position, and an occluded part.
In one possible design, the first co-shot data may be determined as follows: in response to the co-shooting initiator selecting a friend in a friend list, the first electronic device sends a co-shooting request to the server, instructing the server to forward the co-shooting request to the second electronic device and return response information; if the response information indicates that the co-shooting invitee accepts the request, the first electronic device uploads the initiator's data to be co-shot to the server and instructs the server to generate the first co-shot data from the initiator's and the invitee's data to be co-shot.
In one possible design, the co-shooting request includes one or more of the following: co-shooting a photo, co-shooting a video, and co-shooting a live stream.
It should be noted that the co-shooting request of the application applies not only to co-shot photos but also to co-shot videos, co-shot live streams, and the like, which broadens the application's scope, better meets users' needs, and improves the user experience.
In one possible design, the first electronic device may, in response to a co-shooting template selection operation by the co-shooting initiator, send a co-shooting template construction request to the server, instructing the server to generate a co-shooting template according to the request and return the construction result; if the construction result indicates that the template was built successfully, the first electronic device presents the friend list for the initiator to select from.
In one possible design, the co-shooting template selection operation includes selecting the number of co-shooting users and selecting the co-shooting background, where the co-shooting background includes a self-portrait background of the co-shooting initiator and a self-portrait background of the co-shooting invitee.
In one possible design, after receiving the second co-shot data from the server, the first electronic device sends a co-shooting confirmation request to the server in response to a co-shooting confirmation operation triggered by the co-shooting initiator.
In one possible design, the co-shooting confirmation operation is triggered by one or more of the following: the initiator clicking a co-shooting confirmation button, the initiator and the invitee holding a preset facial expression, and the initiator and the invitee holding a pose for a preset duration.
In a fourth aspect, an embodiment of the present application provides a server, including: one or more processors; and one or more memories, where the one or more memories store one or more computer programs comprising instructions which, when executed by the one or more processors, cause the server to perform the following steps:
receiving data to be co-shot from a first electronic device and from a second electronic device, respectively, where the first and second electronic devices have established a co-shooting service, each includes at least one camera, and each device's data to be co-shot is its video stream data; compositing the two devices' data to be co-shot into first co-shot data, which comprises a background layer, a portrait layer from the first electronic device, and a portrait layer from the second electronic device; sending the first co-shot data to both devices; receiving position adjustment information, determined in response to a position adjustment operation on a portrait layer, and adjusting the first co-shot data accordingly to obtain second co-shot data; and sending the second co-shot data to both devices.
It should be noted that the adjusted position of the portrait layer is directly reflected in the second co-shot data, so no position needs to be reserved at shooting time, and the difficulty of later embedding caused by an unreserved position does not arise. The co-shooting method is therefore more flexible and offers a better interactive experience.
In one possible design, the server may separate the portrait layers from the background layer in the first co-shot data, and adjust the position of a portrait layer according to the position adjustment information to obtain the second co-shot data.
In one possible design, the portrait layers include a portrait layer of the co-shooting initiator and a portrait layer of the co-shooting invitee, where the first electronic device belongs to the initiator and the second electronic device belongs to the invitee. The server may acquire the device parameters of the first electronic device and of the second electronic device, adapt the second co-shot data into image data matching the first device's parameters and send it to the first device, and adapt the second co-shot data into image data matching the second device's parameters and send it to the second device.
In one possible design, after receiving a co-shooting request from the first electronic device, the server may forward it to the second electronic device; the co-shooting request is triggered by the co-shooting initiator selecting a friend in the friend list. The server then receives response information from the second electronic device and sends it to the first electronic device.
In one possible design, the co-shooting request includes one or more of the following: co-shooting a photo, co-shooting a video, and co-shooting a live stream.
In one possible design, the server may receive a co-shooting template construction request from the first electronic device and determine the construction result of the co-shooting template, where the construction request is triggered by the first electronic device in response to a co-shooting template selection operation by the co-shooting initiator; the server returns the construction result to the first electronic device.
In one possible design, the co-shooting template selection operation includes selecting the number of co-shooting users and selecting the co-shooting background, where the co-shooting background includes a self-portrait background of the co-shooting initiator and a self-portrait background of the co-shooting invitee.
In one possible design, the server may receive a co-shooting confirmation request from the first electronic device and generate co-shooting confirmation data; the confirmation request is triggered by the first electronic device, in response to the initiator's co-shooting confirmation operation, after the first electronic device receives the second co-shot data from the server.
In one possible design, the co-shooting confirmation operation is triggered by one or more of the following: the initiator clicking a co-shooting confirmation button, the initiator and the invitee holding a preset facial expression, and the initiator and the invitee holding a pose for a preset duration.
In a fifth aspect, the present application provides a computer-readable storage medium storing computer-readable instructions which, when read and executed by a computer, cause the computer to carry out the solution described in any implementation of the first aspect or the second aspect.
In a sixth aspect, the present application provides a computer program product which, when read and executed by a computer, causes the computer to carry out the solution described in any implementation of the first aspect or the second aspect.
In a seventh aspect, an embodiment of the present application further provides a chip, coupled with a memory in an electronic device and configured to invoke a computer program stored in the memory to execute the technical solution of the first aspect or any possible design thereof, or of the second aspect or any possible design thereof; "coupled" in the embodiments of the present application means that two components are combined with each other directly or indirectly.
In an eighth aspect, a graphical user interface is further provided on an electronic device having a display screen, one or more memories, and one or more processors configured to execute one or more computer programs stored in the one or more memories, where the graphical user interface includes the graphical user interface displayed when the electronic device performs the method provided in the first aspect or the second aspect above.
For the technical effects achievable in the second through eighth aspects, refer to the description of the technical effects achievable by the corresponding possible designs of the first aspect; details are not repeated here.
Drawings
Fig. 1A is a schematic diagram of a co-shooting scenario provided in an embodiment of the present application;
Fig. 1B is a schematic diagram of an application scenario of a co-shooting generation method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of a login interface provided in an embodiment of the present application;
Fig. 3 is a schematic flowchart of a login authentication method provided in an embodiment of the present application;
Fig. 4 is a schematic diagram of a co-shooting template selection interface provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of a layer template provided in an embodiment of the present application;
Fig. 6A is a schematic diagram of a co-shooting template selection interface provided in an embodiment of the present application;
Fig. 6B is a schematic diagram of a co-shooting template selection interface provided in an embodiment of the present application;
Fig. 7 is a schematic flowchart of a co-shooting template determination method provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of a friend invitation interface provided in an embodiment of the present application;
Fig. 9 is a schematic diagram of an interface on which an invited user accepts an invitation, provided in an embodiment of the present application;
Fig. 10 is a schematic flowchart of a co-shooting generation method provided in an embodiment of the present application;
Fig. 11 is a schematic diagram of an interface for adjusting a co-shooting position provided in an embodiment of the present application;
Fig. 12 is a schematic diagram of a co-shooting preview stream provided in an embodiment of the present application;
Fig. 13 is a schematic diagram of a co-shooting preview stream provided in an embodiment of the present application;
Fig. 14 is a schematic diagram of adjusting a co-shooting preview stream provided in an embodiment of the present application;
Fig. 15 is a schematic diagram of an interface for locking and unlocking a position on a display interface provided in an embodiment of the present application;
Fig. 16 is a schematic diagram of data interaction provided in an embodiment of the present application;
Fig. 17 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
Fig. 18 is a schematic structural diagram of a server provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be described in detail below with reference to the drawings in the embodiments of the present application.
In the present application, "and/or" describes an association relationship of associated objects, which means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. And, unless stated to the contrary, the embodiments of the present application refer to the ordinal numbers "first", "second", etc., for distinguishing a plurality of objects, and do not limit the sequence, timing, priority, or importance of the plurality of objects.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Fig. 1A shows a schematic diagram of a co-shooting scene to which an embodiment of the present application is applicable, where user 1 and user 2 are in different spatial environments and can take a photo together without moving into the same spatial environment. To achieve co-shooting across different spatial environments, user 1 and user 2 may install a co-shooting-related software application (APP) on their respective electronic devices, or perform the co-shooting operation by calling a co-shooting-related component in an applet loaded on the electronic device. In practical applications, the number of users in the co-shooting scene is not limited; fig. 1A is only a schematic depiction.
In the co-shooting scene shown in fig. 1A, user 1 and user 2 may jointly shoot photos, videos, live broadcasts, and the like; fig. 1A illustrates co-shooting a photo. In this scene, user 1 may be regarded as the co-shooting initiator and user 2 as the co-shooting invitee: user 1 sends a co-shoot request to user 2, and user 2 accepts it, that is, a co-shooting service is established between user 1 and user 2. For example, user 1 sends a co-shoot photo request to user 2 with a click operation in Meipai; after user 2 accepts the request, the two users can shoot together to obtain a group photo containing both of them, so that both user 1 and user 2 are co-shooting users.
Patent publication US10672167B2 mentions that when a user takes a photo, the photo APP on the smart device separates the photo into a foreground part (the portrait) and a background part using image recognition techniques. The invited user sends the foreground part to the inviting user, who then combines the received foreground part with the foreground and background separated by the local APP according to a preset arrangement rule. Position adjustment of the portraits is performed mainly by the inviting user on his own APP through moving and zooming.
Patent publication CN106375193A mentions establishing a service platform that includes a user module, a storage module, a message module, and a processing module. The user module provides registration, login, and friend-adding services; the storage module stores photos uploaded by users; the message module receives messages from users and sends messages to users; the processing module handles group-photo requests, photo synthesis, and photo processing.
However, the solution of US10672167B2 places high demands on the smart device: it requires strong computing power, so smart devices with weak data-processing capability cannot obtain a group photo. Moreover, portrait positions can only be adjusted by the inviting user moving and zooming images in the APP, so the invited users' participation is low and the user experience is poor. In addition, that solution can only generate a group photo adapted to the device parameters of the initiator, and cannot adapt to the device requirements of the other users. The solution of CN106375193A generates the group photo on a server; although this reduces the data-processing pressure on the smart device, it does not consider how each user's position is adjusted, so the user experience is also poor.
In view of the above, the present application provides a co-shooting method to improve the synthesis quality of co-shot image data and the experience of co-shooting users.
Fig. 1B shows an application scenario of the co-shooting method according to an embodiment of the present application. The scenario includes user 1, user 2, user 3, electronic device 1 (a mobile phone), electronic device 2 (a computer), electronic device 3 (a smart watch), and a server. It should be understood that fig. 1B is merely an illustration; the application does not limit the number or types of electronic devices. The owner of electronic device 1 is user 1, the owner of electronic device 2 is user 2, and the owner of electronic device 3 is user 3. Each electronic device may install a co-shooting-related APP such as Meipai, or call a co-shooting-related component through an applet loaded on the device. The co-shooting-related APP may be a native application of the electronic device or an application downloaded from a third-party application market; it is not specifically limited here. The server is not necessarily a single server; it may be one or several of a plurality of servers on which the relevant services are deployed.
In addition, the co-shooting-related APP or component includes a client and a server side. The server side may be deployed on the server in the application scenario shown in fig. 1B, and the client may be deployed on an electronic device. The client may use the processor of the electronic device for data processing, may send data to be processed to the server side, or the client and server side may process the data cooperatively. The co-shooting-related APP or component includes at least a co-shooting module and a social module: a user can initiate and accept co-shoot invitations through the co-shooting module, and can select which users to invite through the social module.
The present application does not limit the form of the electronic device. The electronic device may run the operating systems shown in the original figure (image not reproduced here) or other operating systems. For example, the electronic device may be a mobile phone, a tablet computer, a wearable device (e.g., a watch, a bracelet, a helmet, an earphone, a necklace), an in-vehicle device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), etc.; the embodiments of the present application do not limit the specific type of the electronic device. The electronic device includes at least a camera, a processor, and a memory: the camera acquires the user's self-portrait images; the processor processes the image data from the image acquisition device; the memory may store the image data.
The server mentioned in this application may be a cloud server or a proprietary server; this is not specifically limited here. The server may also be a plurality of electronic devices with the software application installed. The server includes at least a social account service module and a data processing module. The social account module authenticates the identity of a user logging in to the software application and forwards the initiator's co-shoot request to the client of each invited user; the data processing module processes images to synthesize co-shot image data meeting the users' requirements. Note that if the initiator initiates a co-shoot photo request, the co-shot image data is a group photo; if the initiator initiates a co-shoot short-video request, the co-shot image data is a short video; if the initiator initiates a co-shoot live-broadcast request, the co-shot image data is a live broadcast.
Referring to fig. 1B, before executing the solution of the present application, user 1, user 2, and user 3 may log in with their personal information through the different login entries shown in fig. 2 so as to be able to execute the co-shooting service. Fig. 2 (a) shows the case where a co-shooting-related APP such as Meipai is installed on the user's electronic device (hereinafter, Meipai is used as the example of a co-shooting-related APP): clicking Meipai displays a login interface with "name" and "password" entries, and the user may enter a name and password to log in, thereby ensuring a real-time communication connection with the server. Fig. 2 (b) shows the case where the user calls a co-shooting-related component through a plug-in of a video or image software application already loaded on the electronic device, such as a Meipai applet, and enters personal information in the applet so that co-shooting-related operations can be performed and real-time communication with the server is maintained. The co-shooting-related operations include initiating a co-shoot request, accepting a co-shoot request, confirming the co-shot, and the like. The personal information entered at the login entry may be information registered by the user in Meipai or the Meipai applet, or may be system-level information: for example, after the mobile phone is powered on and the unlock password is entered, the Meipai APP can be called directly without entering personal information again; or the Meipai applet can be called directly while the user is logged in to an instant-messaging application, without entering personal information again or switching the logged-in account.
The present application does not limit the specific form of the login information; any way of logging in with personal information is applicable to the solution of the present application.
Fig. 3 illustrates a login authentication method, taking a co-shooting-related APP as an example, including:
step 301: the electronic equipment responds to the login operation of the user on the electronic equipment and sends the login information of the user to the server.
Step 302: and the server performs identity authentication on the user according to the login information of the user.
Step 303: and the server returns the identity authentication result to the electronic equipment.
Step 304: the electronic equipment determines whether the user successfully logs in according to the identity authentication result; if not, go to step 305; if yes, go to step 306.
Step 305: the electronic device prompts the user for a login failure.
Step 306: the electronic equipment keeps the login state of the user and prompts the user that the login is successful.
It should be noted that, when the login information input by the user is not matched with the login information pre-stored in the server, the user cannot normally log in the server of the associated software application, and receives a prompt message indicating that the login fails. When the login information input by the user is matched with the login information prestored in the server, the user can normally log in the server of the associated software application and receives prompt information of successful login.
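The matching rule above can be sketched as a minimal server-side check. This is an illustrative sketch, not the patent's implementation: the credential store, field names, and prompt strings are all assumptions.

```python
# Hypothetical pre-registered login information held by the server.
STORED_CREDENTIALS = {"user1": "secret1"}

def authenticate(name: str, password: str) -> dict:
    """Return an authentication result for the client to act on.

    Login succeeds only when the submitted credentials match the
    pre-stored record; otherwise a failure prompt is returned.
    """
    if STORED_CREDENTIALS.get(name) == password:
        return {"ok": True, "prompt": "login successful"}
    return {"ok": False, "prompt": "login failed"}
```

The client would keep the login state on `{"ok": True}` and show the failure prompt otherwise, mirroring steps 304 to 306.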
It should be further noted that the login verification processes described in fig. 2 and fig. 3 are not only applicable to the electronic device of the co-shooting initiator, but also applicable to the electronic device of the co-shooting invitee, and even other electronic devices installed with the software application, that is, after the login information of the user is successfully verified, the user can both initiate a co-shooting request through the electronic device and receive the co-shooting request.
In an alternative manner, assuming user 1 is the co-shooting initiator, the Meipai client on user 1's electronic device 1 may prompt the user to select attribute information of a co-shooting template. As shown in fig. 4, the options include a 2-layer co-shooting template and a multi-layer co-shooting template, and the client may also help the user distinguish the different templates. For example, in the 2-layer template the background layer is at the bottom and the portrait layer is on top, with portrait positions reserved and fixed in the top layer, as shown in fig. 5 (a); the multi-layer template includes a plurality of portrait layers, in which the portrait positions are not fixed and can change as the initiator or the invited user moves during shooting. The present application uses fig. 4 only as an example; in practice, the way the user selects the template attribute information is not limited, and the user may, for example, also select it by voice input.
When the server constructs the 2-layer template of fig. 5 (a), the background layer is usually placed at the bottom and one portrait layer on top; the server can reserve portrait positions of different sizes in the 2-layer template under construction. These positions usually do not overlap and are fixed in size, although in practice the sizes of the reserved positions can be adjusted according to user requirements. When the server constructs the multi-layer template of fig. 5 (b), the background layer is usually placed at the bottom and multiple portrait layers on top; the portrait positions in the multi-layer template are not fixed, that is, the pre-reserved positions in the portrait layers may overlap, so the positions can change as the initiator or the invitee moves during co-shooting.
It should be noted that a portrait layer is obtained as follows: after receiving the to-be-co-shot data sent by a co-shooting user, the server performs image recognition on it and separates the portrait from the background using an image separation technique. The to-be-co-shot data is video data of the device's user, i.e., video stream data extracted through portrait recognition by the image acquisition device of the electronic device. Furthermore, to meet the requirements of image acquisition devices with different resolutions, the portrait layers in the co-shooting template are composed at the highest resolution allowed among the co-shooting users' image acquisition devices; after the co-shot data is confirmed, it is pushed to each co-shooting user with the resolution reduced or increased as needed, so that the device-parameter requirements of different users' electronic devices can be met.
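The resolution-adaptation rule just described (compose once at the highest allowed resolution, then push a scaled version per device) can be sketched as follows. The function names and the (width, height) tuple convention are assumptions for illustration only.

```python
def composition_resolution(device_resolutions):
    """Pick the highest resolution among all co-shooting users' devices."""
    return max(device_resolutions, key=lambda wh: wh[0] * wh[1])

def adapt_to_device(composed, target):
    """Scale the composed frame size to fit one user's device,
    preserving aspect ratio (reduce or increase as needed)."""
    factor = min(target[0] / composed[0], target[1] / composed[1])
    return (round(composed[0] * factor), round(composed[1] * factor))
```

A 720p device would thus receive a reduced copy of a 1080p composition, while the composition itself is never done below the best available resolution.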
If the initiator selects his own self-portrait background, the server, after receiving the to-be-co-shot data sent by the initiator, can separate the portrait from the background to obtain the background layer of the co-shooting template. If the initiator selects the background of an invited user, the server obtains the background layer from the to-be-co-shot data sent by that invited user. The background layer changes in real time: it changes dynamically with the movement of the co-shooting user (initiator or invitee) or of that user's electronic device, rather than being a static background picture obtained by matting.
After the initiator selects the attribute information of the co-shooting template, the interface jumps to fig. 6A, and several different co-shooting templates are displayed on the interface of the initiator's electronic device 1. Fig. 6A only illustrates 2-person templates (2 co-shooting users), 3-person templates (3 users), and 5-person templates (5 users); the number of templates is not limited in the present application, and fig. 6A is only schematic. User 1 can select a suitable template on the selection interface displayed by electronic device 1. Specifically, the initiator can browse the available templates by sliding the display screen; different sliding methods may be used on different electronic devices, which is not limited here. For example, on a mobile phone the templates can be browsed by sliding the screen up and down or left and right.
It should be noted that the initiator selects the co-shooting template before co-shooting; after the co-shooting service with the invitee is created, the selected template can still be changed, and the initiator can also authorize other users to change it. For example, user 1 selects the 2-person template and sends a co-shoot request to user 2. While users 1 and 2 are executing the co-shooting service, user 3 sends user 1 a message asking to join. If the co-shot data of users 1 and 2 has not yet been constructed, user 1 can directly reselect the 3-person template, or can send request information to user 2 asking whether user 3 is approved to join, and reselect the 3-person template if user 2 approves.
In addition, when the co-shooting initiator selects the co-shooting template, the co-shooting template selection interface may further include a co-shooting background selection, referring to the interface schematic diagram shown in fig. 6B, the co-shooting initiator may select the self-shooting background and may also select a background of an invited user as the co-shooting background, and specifically how to select the co-shooting background may be determined according to actual needs of the user, which is not specifically limited herein.
In addition, the initiator may fully authorize one or more of the invitees, so that the authorized users can perform operations such as reselecting the co-shooting template and selecting the co-shooting background on the initiator's behalf.
Fig. 7 illustrates a process of determining a co-shooting template, taking a co-shooting-related APP as an example, including:
Step 701: the electronic device, in response to the initiator's template selection operation on the electronic device (i.e., the first electronic device), sends a construction request for the selected co-shooting template to the server.
It should be noted that the template selection operation includes selecting the number of co-shooting users and selecting the co-shooting background. Selecting the number of users means choosing a template by headcount as shown in fig. 6A: choosing the 2-person template selects 2 co-shooting users, and choosing the 3-person template selects 3. The co-shooting background, shown in fig. 6B, may be the initiator's own self-portrait background or an invitee's self-portrait background.
Step 702: the server generates the co-shooting template according to the construction request.
Step 703: the server returns the template construction result to the first electronic device.
It should be noted that after receiving the construction request, the server determines the attribute information of the template selected by the user and constructs a template meeting it, e.g., a 2-layer template for 2 people or a multi-layer template for 3 people.
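The server-side construction rule (2-layer template for 2 users, multi-layer template for more) can be sketched as below; the layer names and the returned dictionary shape are assumptions for illustration.

```python
def build_template(num_users: int, background_owner: str) -> dict:
    """Sketch of template construction from the selected attribute info."""
    if num_users == 2:
        # one portrait layer on top, with fixed, non-overlapping reserved positions
        layers = ["background", "portraits"]
        fixed_positions = True
    else:
        # one portrait layer per user; positions may overlap and move
        layers = ["background"] + [f"portrait_{i + 1}" for i in range(num_users)]
        fixed_positions = False
    return {"layers": layers, "fixed": fixed_positions,
            "background": background_owner}
```

For example, `build_template(3, "initiator")` yields a background layer plus three portrait layers whose positions are not fixed.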
Step 704: the first electronic device determines from the construction result whether the co-shooting template was built successfully; if not, step 705 is executed; if yes, step 706 is executed.
Step 705: the first electronic device prompts that template construction failed.
Step 706: the first electronic device prompts that the template was built successfully and prompts the initiator to select from a friend list.
It should be noted that when the first electronic device sends the template construction request to the server, it may set a timer; if the timer expires before the construction result is received from the server, the first electronic device may prompt the initiator that template construction failed. Template construction on the server may also fail for other reasons, which are not detailed here.
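The timer behavior above can be sketched with a blocking wait on a reply channel; treating an expired timer as a failure to show the initiator. The queue-based channel and the 5-second default are assumptions, not values from the patent.

```python
import queue

def await_construction_result(replies: "queue.Queue", timeout_s: float = 5.0) -> dict:
    """Wait for the server's construction result; an expired timer is
    reported to the initiator as a construction failure."""
    try:
        return replies.get(timeout=timeout_s)
    except queue.Empty:
        return {"ok": False, "prompt": "co-shooting template construction failed"}
```

The client would start this wait immediately after sending the construction request of step 701.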
In an optional manner, as shown in fig. 8, the friend list may be hidden in the operation interface. After the initiator clicks the invite button, the friend list (which may contain users who have Meipai installed) is displayed: the list on the right of fig. 8 appears automatically after the initiator clicks the invite item, with online friends at the top (in fig. 8, "self", "millet", and "xiaoming" are online) and offline friends shown hatched below (e.g., "123" in fig. 8 is offline). The initiator may preferentially send the co-shoot request to online friends, but may also send it to offline friends; the request may reach the invited user in the form of a short message, a phone call, or a window message prompt. As shown in fig. 9, a prompt appears on the screen of the invited user's second electronic device inviting the user to co-shoot and asking for confirmation as soon as possible. After receiving the invitation, the invited user logs in to Meipai and triggers an instruction to accept the co-shoot request (by voice, by clicking a confirmation button, etc.). Once the second electronic device is triggered by the acceptance instruction, its image acquisition device starts automatically and sends the to-be-co-shot data (i.e., the video stream data collected by the second electronic device's camera) to the server.
Fig. 10 illustrates the execution flow of the co-shooting method, taking as an example an initiator inviting 1 friend; in actual application, the number of invitees is not limited. In the figure, the initiator's electronic device is the first electronic device and the invitee's electronic device is the second electronic device. The flow specifically includes:
step 1001: the first electronic equipment responds to friend selection operation of a auction initiator in a friend list and sends a auction request to a server.
Step 1002 a: the server sends the clap request to the second electronic equipment.
It should be noted that the snapshot request may carry snapshot template information so that the location information that is available for adjustment to the snapshot invitees may be presented. The auction request can also carry identification information of the first electronic device and identification information of the auction initiator. The server can learn which electronic device the close-shot request is sent through according to the identification information of the electronic device of the close-shot initiator. The server can learn which user sent the auction request according to the identification information of the auction initiator.
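The fields the co-shoot request may carry can be sketched as a small message structure. The field names and example values are assumptions introduced for illustration, not identifiers from the patent.

```python
from dataclasses import asdict, dataclass

@dataclass
class CoShootRequest:
    """Sketch of the co-shoot request message described above."""
    template_id: str    # co-shooting template info, so adjustable positions can be shown
    device_id: str      # identifies the initiator's electronic device
    initiator_id: str   # identifies the initiating user

request = CoShootRequest(template_id="tpl-2p", device_id="dev-1",
                         initiator_id="user-1")
```

On receipt, the server can route responses back to `device_id` and attribute the request to `initiator_id`.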
Step 1002b: the server receives response information from the second electronic device.
It should be noted that if the invitee accepts the co-shoot request, the second electronic device starts its image acquisition device (camera) and sends the to-be-co-shot data to the server, i.e., step 1005 is executed.
Step 1002c: the server returns the response information to the first electronic device.
Step 1003: the first electronic device determines from the response information whether the invitee accepted the co-shoot request; if not, step 1004 is executed; if yes, step 1005 is executed.
Step 1004: the first electronic device prompts the initiator that the invitation failed.
Step 1005a: the first electronic device prompts the initiator that the invitation succeeded and uploads the to-be-co-shot data to the server.
Step 1005b: the second electronic device uploads the to-be-co-shot data to the server.
The execution order of steps 1005a and 1005b is not fixed: they may be executed simultaneously, or step 1005b may be executed before step 1005a. It should also be noted that the to-be-co-shot data may be a self-portrait preview stream or image data in other forms; the present application takes the self-portrait preview stream as an example. The preview stream may be video stream data acquired in real time by the electronic device's image acquisition device, or video stream data pre-recorded by it; this is not specifically limited here.
Step 1006a: the server generates first co-shot data from the initiator's to-be-co-shot data and the invitee's to-be-co-shot data.
It should be noted that if the to-be-co-shot data is a self-portrait preview stream, the first co-shot data is a co-shot preview stream, i.e., a video stream containing both the initiator and the invitee. In specific execution, the server can perform layer separation on both users' to-be-co-shot data, dividing each into a portrait layer and a background layer, and then combine the separated portrait and background layers with the co-shooting template to obtain the first co-shot data.
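The separation-and-combination step can be sketched with placeholder data structures. The real system would use an image-segmentation technique on video frames; here dictionaries stand in for streams, and all names are assumptions.

```python
def separate_layers(stream: dict) -> dict:
    """Stand-in for image recognition + portrait/background separation."""
    return {"portrait": stream["portrait"], "background": stream["background"]}

def compose_first_data(initiator_stream, invitee_stream,
                       background_owner="initiator"):
    """Combine separated layers into first co-shot data,
    using the background chosen by the initiator."""
    a = separate_layers(initiator_stream)
    b = separate_layers(invitee_stream)
    background = a["background"] if background_owner == "initiator" else b["background"]
    # layer order, bottom to top: background, then the two portrait layers
    return [("background", background),
            ("portrait", a["portrait"]),
            ("portrait", b["portrait"])]
```

The resulting layer stack corresponds to the template structure of fig. 5: background at the bottom, portrait layers on top.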
Step 1006b: the server returns the first co-shot data to the second electronic device.
Step 1006c: the server returns the first co-shot data to the first electronic device.
Step 1007a: the first electronic device receives the first co-shot data from the server and displays it; the first co-shot data comprises a background layer, a portrait layer from the first electronic device, and a portrait layer from the second electronic device.
Step 1007b: the second electronic device receives the first co-shot data from the server and displays it.
Step 1008: in response to a position adjustment operation on at least 1 portrait layer, the first electronic device obtains position adjustment information.
Step 1009: the first electronic device sends the position adjustment information to the server.
Step 1010: the server receives the position adjustment information and adjusts the first co-shot data according to it to obtain second co-shot data.
It should be noted that fig. 10 only illustrates the initiator adjusting the position of a portrait layer; in practice the invitee may also adjust portrait layers, and the server may adjust the co-shot data by considering the position adjustment operations of both the initiator and the invitee together. The server may adjust the co-shot data in real time: when the initiator adjusts the position of his portrait layer, the server adjusts the co-shot data in real time, and likewise when the invitee does. Alternatively, when a position is adjusted, the server may generate a virtual adjustment and display it to the co-shooting user; only after the user clicks to confirm does the server adjust the portrait layer according to the confirmed position. For example, when the initiator adjusts a portrait layer, the server generates virtually adjusted co-shot data (i.e., co-shot data obtained by simulating the adjustment according to the initiator's operation) and displays it to the initiator; after the initiator clicks the confirm button, the first co-shot data is adjusted according to the confirmed position.
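The preview-then-confirm pattern just described can be sketched as two small functions: one produces the virtual (simulated) adjustment without touching the real co-shot data, and one commits it only on confirmation. Data shapes and names are assumptions.

```python
import copy

def virtual_adjustment(co_shot_layers, layer_index, new_position):
    """Simulate the position adjustment on a copy, leaving the
    first co-shot data unchanged until the user confirms."""
    preview = copy.deepcopy(co_shot_layers)
    preview[layer_index]["position"] = new_position
    return preview

def apply_if_confirmed(co_shot_layers, preview, confirmed: bool):
    """Only a confirmed adjustment replaces the first co-shot data."""
    return preview if confirmed else co_shot_layers
```

This keeps the displayed preview and the authoritative co-shot data separate until the confirm button is clicked.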
It should be noted that, because the first co-shooting data is in effect a dynamic video image, the electronic device of the co-shooting initiator or invitee may display the co-shooting data with coordinate information as shown in fig. 11. The lower-left corner of the co-shooting initiator's portrait layer, that is, the origin (0, 0, 0), may serve as the reference point for position adjustment operations. In practical applications the reference point may instead be placed at another corner of the layer, such as the upper-left or upper-right corner, or the co-shooting initiator may define the reference-point coordinates when selecting the co-shooting template. The co-shooting users then adjust the positions of their portrait layers relative to the reference-point coordinates.
In addition, when a co-shooting user adjusts the position of a portrait, for example by moving left or right or remaining stationary, the corresponding electronic device records the offset (X, Y, Z) relative to the reference point. Generally the user does not need to adjust the Z-axis coordinate unless a zoom-in/zoom-out effect on the portrait is desired, which the user can achieve by moving toward or away from the camera. A distance sensor on the electronic device can record the user's forward and backward position changes and, through relative distance calculation, reflect them in real time on the preview interface of the electronic device.
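The offset bookkeeping described above can be sketched as follows. This is an assumption-level illustration of how a device might record the (X, Y, Z) offset relative to the reference point and map the Z component to a portrait zoom; the patent does not specify these formulas, and the `sensitivity` constant in `scale_for_z` is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Offset:
    x: float  # left/right translation
    y: float  # up/down translation
    z: float  # forward/backward (toward or away from the camera)

def record_offset(reference, current):
    """Offset of the current position relative to the reference point."""
    return Offset(current[0] - reference[0],
                  current[1] - reference[1],
                  current[2] - reference[2])

def scale_for_z(base_scale, z, sensitivity=0.1):
    """Assumed zoom rule: moving toward the camera (z < 0) enlarges the
    portrait, moving away (z > 0) shrinks it; clamped to stay positive."""
    return max(0.05, base_scale * (1.0 - sensitivity * z))
```

With the defaults, a user who stays on the reference plane keeps scale 1.0, while stepping two units toward the camera enlarges the portrait by 20%.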
The position of a co-shooting user in the co-shooting data can be adjusted either by the user moving his or her own position according to the coordinate information, or by moving the position of the user's electronic device. The position movement includes: forward and backward translation, left and right translation, up and down translation, and rotational movement, and the present application is not specifically limited herein. In a specific implementation, the following 3 adjustment modes may be included:
Mode 1: the co-shooting initiator adjusts the position of the co-shooting initiator in the first co-shooting data.
The co-shooting initiator can adjust his or her position in the co-shooting data by moving: for example, when the initiator moves forward, the initiator's portrait in the co-shooting data becomes larger, and when the initiator moves backward, the portrait becomes smaller; when the initiator moves left, the portrait moves left, and when the initiator moves right, the portrait moves right. The initiator can also adjust the position by moving the initiator's electronic device: for example, moving the device forward enlarges the initiator's portrait in the co-shooting data, and moving the device backward shrinks the portrait.
Mode 2: the co-shooting invitee adjusts the position of the co-shooting invitee in the first co-shooting data.
The co-shooting invitee can adjust his or her position in the co-shooting data by moving: for example, when the invitee rotates to the right, the invitee's portrait in the first co-shooting data rotates to the right, and when the invitee rotates to the left, the portrait rotates to the left. The invitee can also adjust the position by moving the invitee's electronic device: for example, when the invitee's device moves upward, the invitee's portrait in the co-shooting data moves downward, and when the device moves backward, the portrait becomes smaller. The remaining cases of device movement are not described one by one in the present application.
Mode 3: the co-shooting initiator adjusts the position of the co-shooting invitee in the first co-shooting data.
The co-shooting initiator adjusts the position of the co-shooting invitee in the first co-shooting data through position adjustment items, which include: zoom scale, front/back position, left/right position, and occluded part. For example, if the initiator selects the zoom scale item, the portrait of the invitee in the first co-shooting data is scaled accordingly. If the initiator selects the front/back position item, the front/back position of the invitee's portrait in the first co-shooting data is adjusted; for example, if the first co-shooting data contains 2 co-shooting users with the initiator's portrait behind and the invitee's portrait in front, then after the initiator selects the front/back adjustment item, the initiator's portrait is placed in front of the invitee's portrait. On the precondition that the co-shooting template has multiple layers, the initiator can also select an occluded-part adjustment item, for example the right shoulder, so that the right shoulder of the invitee in the co-shooting data is displayed as occluded.
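The adjustment items of mode 3 can be illustrated with a small sketch. The `PortraitLayer` structure and the item names are assumptions invented for this example; the patent names the items (zoom scale, front/back position, left/right position, occluded part) but does not define a data model for them.

```python
from dataclasses import dataclass, field

@dataclass
class PortraitLayer:
    owner: str
    scale: float = 1.0
    x: float = 0.0            # left/right position
    z_order: int = 0          # larger values are drawn in front
    occluded: set = field(default_factory=set)

def apply_adjustment(layer, item, value):
    """Apply one of the assumed position adjustment items to a layer."""
    if item == "zoom":
        layer.scale *= value           # scale the invitee's portrait
    elif item == "front_back":
        layer.z_order = value          # e.g. move a portrait in front
    elif item == "left_right":
        layer.x += value               # shift the portrait horizontally
    elif item == "occlude":
        layer.occluded.add(value)      # e.g. "right_shoulder"
    return layer
```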
Step 1011 a: the server returns the second co-shooting data to the first electronic device.
Step 1011 b: the server returns the second co-shooting data to the second electronic device.
Step 1012 a: the first electronic device receives the second co-shooting data and displays it.
Step 1012 b: the second electronic device receives the second co-shooting data and displays it.
It should be noted that, when the data to be co-shot or the first co-shooting data is transmitted between the first or second electronic device and the server, low-resolution image data may be transmitted, and high-resolution image data may be displayed only after the co-shooting is confirmed. In this way, the data traffic consumed by the co-shooting service can be reduced.
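The bandwidth-saving rule above can be sketched as a phase-dependent resolution choice. The concrete resolutions are assumed purely for illustration; the patent only states that low-resolution data is transmitted before confirmation.

```python
PREVIEW_RES = (640, 360)    # assumed low resolution used while previewing
FULL_RES = (1920, 1080)     # assumed full resolution used after confirmation

def resolution_for_phase(confirmed):
    """Resolution a device transmits: low before the co-shooting is
    confirmed, full afterwards."""
    return FULL_RES if confirmed else PREVIEW_RES

def preview_traffic_fraction():
    """Fraction of per-frame pixel traffic used during preview compared
    with transmitting at full resolution throughout."""
    pw, ph = PREVIEW_RES
    fw, fh = FULL_RES
    return (pw * ph) / (fw * fh)
```

With these assumed values, previewing uses one ninth of the pixel traffic of a full-resolution stream.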
In an optional mode, after responding to a co-shooting confirmation operation triggered by the co-shooting initiator, the first electronic device sends a co-shooting confirmation request to the server, and the server generates confirmed co-shooting data containing the co-shooting initiator and the co-shooting invitee according to the request.
The co-shooting confirmation operation can be triggered in one or more of the following ways: the co-shooting initiator clicks a co-shooting confirmation button; the co-shooting initiator and invitee hold a preset expression (such as a smile); or the co-shooting initiator and invitee hold a pose for a preset duration (for example, remaining still for 5 seconds after striking the pose). The present application does not specifically limit the confirmation method; any method that can trigger the co-shooting confirmation request is applicable, for example the co-shooting users shouting "eggplant" (the Chinese equivalent of "cheese").
In addition, fig. 12 shows the co-shooting preview stream displayed on the preview interface of the co-shooting initiator's electronic device. If, when selecting the 3-person co-shooting template, the initiator selects his or her own selfie background as the co-shooting background, the preview interface of the client displays a co-shooting preview stream whose background is the initiator's selfie background, as shown in fig. 12(a). If the initiator instead selects the background of a co-shooting invitee when selecting the 3-person template, the preview interface displays a co-shooting preview stream whose background is the selfie background of invitee friend 1; the initiator can also select background switching to switch the background in the co-shooting preview stream to that of invitee friend 2. In addition, when the initiator has selected the co-shooting template but not yet a co-shooting background, the preview interface of the initiator's electronic device may be displayed as shown in fig. 13, presenting co-shooting preview streams with different backgrounds, including the initiator's own selfie background and the backgrounds of the co-shooting invitees; fig. 13 illustrates only the co-shooting preview streams with the backgrounds of 2 invitees.
In addition, it should be noted that the server may adjust the imaging ratio of the target object in the co-shooting preview stream according to the screen display information of each co-shooting user's electronic device. As shown in fig. 14, the co-shooting initiator is user 1 and the co-shooting invitee is user 2; the portrait of user 1 occupies most of the selfie preview stream that user 1 sends to the server, and the portrait of user 2 likewise occupies most of user 2's selfie preview stream. In this case the server may adaptively resize the portraits in the selfie preview streams of users 1 and 2 so as to synthesize a complete co-shooting preview stream that can be displayed on the electronic devices of both users.
In addition, when performing position adjustment, a co-shooting user may, after settling on a position, click a confirmation operation on the display interface of the client, as shown in fig. 15(a), to lock his or her position in the co-shooting preview stream; the user is then free to move without affecting the locked position. For example, in a co-shot live broadcast, a co-shooting user whose position is unsatisfactory may need to move forward, but holding that forward position is physically uncomfortable. The user can move forward, lock the adjusted position, and then return to a comfortable position for the rest of the broadcast without affecting its appearance.
Further, if the user no longer wishes to hold the locked position, the user can exit the position lock by clicking the unlock control shown in fig. 15(b).
It should be further noted that, when a co-shooting user's electronic device sends the selfie preview stream to the server, it may also send the device parameters of that electronic device, so that the server can synthesize co-shooting data adapted to the device parameters of each co-shooting user's electronic device. The device parameters include: the image resolution of the device's image acquisition apparatus, the screen size of the device, and the like. For example, if the co-shooting users are initiator A, whose device has a 6-inch screen, and invitee B, whose device has a 2-inch screen, the server generates confirmed co-shooting data for the 6-inch screen and pushes it to A's device for display, and generates confirmed co-shooting data for the 2-inch screen and pushes it to B's device for display.
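The per-device adaptation can be sketched as an aspect-preserving fit. The pixel screen sizes and the fitting rule here are assumptions; the patent speaks of screen size in inches and leaves the scaling method open.

```python
def fit_to_screen(image_size, screen_size):
    """Scale image_size to fit inside screen_size, preserving aspect ratio."""
    iw, ih = image_size
    sw, sh = screen_size
    k = min(sw / iw, sh / ih)
    return (round(iw * k), round(ih * k))

def render_for_devices(master_size, device_screens):
    """One output size per device, adapted to that device's screen."""
    return {dev: fit_to_screen(master_size, screen)
            for dev, screen in device_screens.items()}
```

For a 1920x1080 master image, a device with a 240x240 screen would receive a 240x135 rendition while a 1440x3120 screen receives 1440x810.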
Fig. 16 is a schematic diagram of data interaction between the electronic devices of the co-shooting users according to an embodiment of the present application. Each co-shooting user's electronic device sends its selfie preview stream to the server, and the server generates the co-shooting preview stream and pushes it to the client of each user's electronic device for display. Fig. 16 illustrates the data transmission process between one co-shooting initiator and 2 co-shooting invitees, but in practical applications the number of invitees is not limited.
According to the above scheme for generating a co-shot photograph, the user selects a co-shooting template, the server constructs the template according to the selection operation, and after the co-shooting users' electronic devices send their selfie preview streams to the server, the server performs layer separation or portrait matting on the streams and synthesizes a co-shot photograph matching the co-shooting model. In addition, the server constructs preview streams in standards or specifications supported by each electronic device according to the users' device parameters and pushes them to the devices for display. In this way, users in scattered geographic locations can use their electronic devices to take a real-time remote group photo, improving the photographing experience. Moreover, an invited user can adjust his or her own position according to the positions of the other co-shooting users in the co-shooting preview stream. When synthesizing the photograph, the server can generate photographs of different specifications according to the users' device parameters, meeting the preview and viewing requirements of different devices. Finally, the co-shooting initiator can select either his or her own selfie background or the background of another co-shooting user as the co-shooting background, which further improves the co-shooting experience.
It should be noted that the structure of the electronic device may be as shown in fig. 17. The electronic device 100 includes a sensor module 101, a communication module 102, a power module 103, a processor 104, a memory 105, and a display 106. It should be understood that the components shown in fig. 17 do not constitute a specific limitation on the electronic device 100; the electronic device 100 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently.
In some embodiments, the electronic device 100 may include a wireless communication module and/or a mobile communication module, and one or more antennas. The electronic device 100 may implement a communication function through one or more antennas, a wireless communication module, or a mobile communication module. In some examples, the mobile communication module may provide a solution for applications on the electronic device 100 that includes wireless communication, such as 2G/3G/4G/5G. The wireless communication module may provide solutions for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. One or more antennas may be used to transmit and receive electromagnetic wave signals.
The electronic device 100 may further include a power module 103, such as a battery, for supplying power to the components of the electronic device 100, such as the processor 104 and the communication module 102. In other embodiments, the electronic device 100 may also be connected to a charging device (for example, through a wireless or wired connection), and the power module may receive electric energy from the charging device and store it in the battery.
The processor 104 may include one or more processing units. For example, the processor 104 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be separate devices or may be integrated into one or more processors. The processor 104 may serve as the neural center and command center of the electronic device 100, generating operation control signals according to instruction operation codes and timing signals to control instruction fetching and execution. In other embodiments, a memory may also be provided in the processor 104 for storing instructions and data. In some embodiments, this memory is a cache that holds instructions or data the processor 104 has just used or recycled; if the processor 104 needs them again, they can be fetched directly from the cache, avoiding repeated accesses, reducing the processor's waiting time, and thus improving system efficiency. The processor 104 may run the software code/modules of the co-shooting method provided in the embodiments of the present application.
The memory 105 may be used to store computer-executable program code, which includes instructions. The processor 104 performs the various functional applications and data processing of the electronic device 100 by executing the instructions stored in the memory. The memory may include high-speed random access memory and may further include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS), which is not limited in the embodiments of the present application.
In some embodiments, the electronic device 100 may also include a display 106 (or display screen), for example the display of a bracelet or of a watch when the electronic device 100 is such a device. The display may be used to show the co-shooting preview stream, the display interface of the co-shooting image data, or other applications. The display includes a display panel, which may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, a touch sensor may be disposed in the display to form a touch screen, which is not limited in the present application. The touch sensor is used to detect a touch operation applied to or near it and may pass the detected touch operation to the processor 104 to determine the touch event type. Visual output associated with the touch operation may be provided via the display.
In some embodiments, the mobile communication module may be coupled with one or more antennas. For example, the mobile communication module may receive electromagnetic waves via the antennas, filter and amplify the received electromagnetic waves to obtain electrical signals, and transmit these to the processor 104 for processing (for example, the processor 104 determines whether to produce a corresponding output in response to the electrical signals). The mobile communication module may also amplify signals processed by the processor 104 and convert them into electromagnetic waves for radiation via the antennas. In other embodiments, the wireless communication module may likewise be coupled with one or more antennas: it may receive electromagnetic waves via the antennas, filter and amplify them, and transmit them to the processor 104 for processing, and it may amplify signals processed by the processor 104 and convert them into electromagnetic waves for radiation. In actual execution, the first electronic device may perform the following steps:
Step S1: the first electronic device establishes a co-shooting service with the second electronic device.
Step S2: the first electronic device uploads its data to be co-shot to the server; the data to be co-shot is video stream data of the first electronic device.
Step S3: the first electronic device receives first co-shooting data from the server and displays it; the first co-shooting data includes a background layer, a portrait layer from the first electronic device, and a portrait layer from the second electronic device.
Step S4: position adjustment information is obtained in response to a position adjustment operation on a portrait layer.
Step S5: the position adjustment information is sent to the server.
Step S6: the first electronic device receives second co-shooting data generated by the server according to the position adjustment information and displays it.
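Steps S1-S6 can be sketched end to end with an in-process stand-in for the server. Every name here (`FakeServer`, the record fields, the stream labels) is an assumption invented for illustration, not an interface defined by the patent.

```python
class FakeServer:
    """Minimal stand-in for the co-shooting server."""

    def compose(self, streams):
        # Server side of step S3: merge the uploaded to-be-co-shot
        # streams into first co-shooting data (here just a record).
        return {"layers": list(streams), "version": 1}

    def adjust(self, data, adjustment):
        # Server side of step S6: produce second co-shooting data
        # from the position adjustment information.
        out = dict(data, version=data["version"] + 1)
        out["adjustment"] = adjustment
        return out

def client_flow(server):
    uploaded = ["initiator_stream", "invitee_stream"]   # S1-S2
    first = server.compose(uploaded)                    # S3
    adjustment = {"layer": "initiator", "dx": 10}       # S4
    second = server.adjust(first, adjustment)           # S5-S6
    return second
```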
It should be noted that, in the present application, an adjustment to the position of a portrait layer is visually reflected in the second co-shooting data, so no space needs to be reserved at co-shooting time, avoiding the difficulty of compositing a portrait in post-processing when no space was reserved. This makes the co-shooting method more flexible and the user interaction experience better.
In one possible design, the portrait layers include the portrait layer of the co-shooting initiator and the portrait layer of the co-shooting invitee; the first electronic device belongs to the co-shooting initiator and the second electronic device belongs to the co-shooting invitee. The first electronic device may be triggered to adjust the position of the initiator's portrait layer, obtaining position adjustment information, in response to the initiator moving or the first electronic device being moved; or triggered to adjust the position of the invitee's portrait layer, obtaining position adjustment information, in response to the invitee moving or the second electronic device being moved.
It should be noted that, when adjusting a position, the co-shooting user can either move his or her own position or move the corresponding electronic device. For example, if co-shooting user 1 holds a mobile phone as the electronic device, the user can adjust his or her position in the co-shot either by moving personally or by moving the phone. Adjusting the position in these ways improves the flexibility of the adjustment and better meets users' needs.
In one possible design, the position adjustment information includes: zoom scale, front/back position, left/right position, and occluded part.
In one possible design, the first co-shooting data may be determined as follows: in response to a friend selection operation by the co-shooting initiator in a friend list, a co-shooting request is sent to the server, instructing the server to forward the co-shooting request to the second electronic device and return response information; if it is determined from the response information that the co-shooting invitee accepts the request, the initiator's data to be co-shot is uploaded to the server, instructing the server to generate the first co-shooting data from the data to be co-shot of the initiator and of the invitee.
In one possible design, the co-shooting request includes one or more of the following: a co-shot photo, a co-shot video, and a co-shot live broadcast.
It should be noted that the co-shooting request of the present application is applicable not only to co-shot photos but also to co-shot videos, co-shot live broadcasts, and the like, so the application scope is wider, user needs are better met, and the user experience is improved.
In one possible design, the first electronic device may, in response to a co-shooting template selection operation by the co-shooting initiator, send a co-shooting template construction request to the server, instructing the server to generate the co-shooting template according to the request and return the construction result; if it is determined from the result that the template was constructed successfully, the friend list is presented to the initiator for selection.
In one possible design, the co-shooting template selection operation includes: selecting the number of co-shooting users and selecting the co-shooting background, where the co-shooting background includes the selfie background of the co-shooting initiator and the selfie backgrounds of the co-shooting invitees.
In one possible design, after receiving the second co-shooting data from the server, the first electronic device sends a co-shooting confirmation request to the server in response to a co-shooting confirmation operation triggered by the co-shooting initiator.
In one possible design, the co-shooting confirmation operation is triggered in one or more of the following ways: the co-shooting initiator clicks a co-shooting confirmation button; the co-shooting initiator and invitee hold a preset expression; or the co-shooting initiator and invitee hold a pose for a preset duration.
Fig. 18 is a schematic structural diagram of a server provided in an embodiment of the present application. As shown in fig. 18, the server 60 includes a communication interface 601, a processor 602, and a memory 603. Further, the server 60 may include a bus system through which the processor 602, the memory 603, and the communication interface 601 are connected.
The processor 602 may be a chip. For example, the processor 602 may be a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a system on chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a digital signal processing circuit (DSP), a Microcontroller (MCU), a Programmable Logic Device (PLD), or other integrated chips.
It should be noted that the processor 602 in the embodiment of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The processor described above may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
The memory 603 may be volatile memory, non-volatile memory, or both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, without limitation, these and any other suitable types of memory.
The communication interface 601 may be used to input and/or output information. In an alternative embodiment, when the server includes a transceiver, the method steps performed by the communication interface 601 may also be performed by the transceiver.
The memory 603 is used to store the computer-executable instructions for carrying out the solutions of the embodiments of the present application, and their execution is controlled by the processor 602. The processor 602 executes the computer-executable instructions stored in the memory 603 to implement the co-shooting method provided in the embodiments of the present application. Alternatively, the processor 602 may perform the processing-related functions of the method provided in the following embodiments, with the communication interface 601 responsible for communicating with other devices or a communication network; this is not specifically limited in the embodiments of the present application. Optionally, the computer-executable instructions in the embodiments of the present application may also be referred to as application program code, which is likewise not specifically limited. The server may perform the following steps:
Step C1: the server receives data to be co-shot from the first electronic device and from the second electronic device; the first and second electronic devices have established a co-shooting service, and each includes at least one camera; the data to be co-shot of the first electronic device is video stream data of the first electronic device, and the data to be co-shot of the second electronic device is video stream data of the second electronic device.
Step C2: the server synthesizes the data to be co-shot of the first and second electronic devices into first co-shooting data, which includes a background layer, a portrait layer from the first electronic device, and a portrait layer from the second electronic device, and sends the first co-shooting data to the first and second electronic devices.
Step C3: the server receives position adjustment information and adjusts the first co-shooting data according to it to obtain second co-shooting data; the position adjustment information is determined in response to a position adjustment operation on a portrait layer.
Step C4: the server sends the second co-shooting data to the first and second electronic devices.
It should be noted that in the present application, an adjustment to the position of a portrait layer is directly reflected in the second co-shooting data, so no position needs to be reserved at shooting time; this avoids the difficulty of embedding a participant in post-production when no position was reserved during shooting. The co-shooting method is thus more flexible and provides a better interactive experience for users.
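For orientation, steps C1-C4 can be sketched as a small server-side pipeline. Everything below is an illustrative assumption: the class names, the string stand-ins for video stream data, and the pixel-offset form of the position adjustment information are not part of the disclosure.

```python
from dataclasses import dataclass, field

# Hypothetical layer model: each portrait layer remembers the device it
# came from and its (x, y) position over the shared background layer.
@dataclass
class PortraitLayer:
    device_id: str
    x: int = 0
    y: int = 0

@dataclass
class CoShootingData:
    background: str
    portraits: list = field(default_factory=list)

def compose(first_stream, second_stream, background):
    # Step C2: merge both devices' data to be co-shot into one frame,
    # one portrait layer per source device, over a shared background layer.
    return CoShootingData(
        background=background,
        portraits=[PortraitLayer(first_stream), PortraitLayer(second_stream)],
    )

def adjust(data, device_id, dx, dy):
    # Step C3: apply the position adjustment information to the matching
    # portrait layer; the result plays the role of the second co-shooting data.
    for layer in data.portraits:
        if layer.device_id == device_id:
            layer.x += dx
            layer.y += dy
    return data

first = compose("device-A-stream", "device-B-stream", "shared-background")  # steps C1/C2
second = adjust(first, "device-B-stream", 40, -10)                          # step C3
print(second.portraits[1].x, second.portraits[1].y)  # 40 -10
```

Step C4 would then serialize `second` and push it back to both devices; transport details are outside this sketch.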
In one possible design, the server may separate the portrait layers from the background layer in the first co-shooting data, and adjust the positions of the portrait layers according to the position adjustment information to obtain the second co-shooting data.
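The separation and repositioning in this design can be illustrated with a mask-based split; the NumPy representation, the binary segmentation mask, and the cyclic `shift_layer` helper are assumptions of the sketch, not the disclosed implementation.

```python
import numpy as np

def split_layers(frame, mask):
    """Separate a composed frame into a portrait layer and a background
    layer using a person-segmentation mask (1 = portrait, 0 = background)."""
    portrait = np.where(mask[..., None] == 1, frame, 0)
    background = np.where(mask[..., None] == 0, frame, 0)
    return portrait, background

def shift_layer(layer, dx, dy):
    """Reposition a layer by (dx, dy) pixels. np.roll shifts cyclically;
    a production implementation would pad the vacated border instead."""
    return np.roll(layer, shift=(dy, dx), axis=(0, 1))

frame = np.ones((2, 3, 3), dtype=np.uint8) * 7   # tiny 2x3 RGB frame
mask = np.array([[1, 0, 0],
                 [0, 0, 0]])                     # one "portrait" pixel
portrait, background = split_layers(frame, mask)
moved = shift_layer(portrait, dx=1, dy=0)        # portrait pixel moves one column right
```

Re-compositing the shifted portrait over the background then yields the second co-shooting data.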
In one possible design, the portrait layers include a portrait layer of the co-shooting initiator and a portrait layer of the co-shooting invitee; the first electronic device belongs to the co-shooting initiator, and the second electronic device belongs to the co-shooting invitee. The server may acquire the device parameters of the first electronic device and the device parameters of the second electronic device, adjust the second co-shooting data into image data matching the device parameters of the first electronic device and send it to the first electronic device, and adjust the second co-shooting data into image data matching the device parameters of the second electronic device and send it to the second electronic device.
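As a sketch of this device-parameter matching, the snippet below scales the composed frame to each receiver's screen; the `DEVICE_PARAMS` table and the aspect-ratio-preserving rule are illustrative assumptions, since the disclosure does not specify which device parameters are used or how.

```python
# Hypothetical per-device parameters: target (width, height) in pixels.
DEVICE_PARAMS = {"device-A": (1080, 2340), "device-B": (720, 1560)}

def adapt_for_device(width, height, device_id):
    """Scale the second co-shooting data so it fits the target device's
    screen while preserving the aspect ratio of the composed frame."""
    target_w, target_h = DEVICE_PARAMS[device_id]
    scale = min(target_w / width, target_h / height)
    return round(width * scale), round(height * scale)

print(adapt_for_device(1080, 1920, "device-B"))  # (720, 1280)
```

The same composed frame is thus delivered at 1080x1920 to device A and downscaled to 720x1280 for device B.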
In one possible design, after receiving a co-shooting request from the first electronic device, the server may send the co-shooting request to the second electronic device; the co-shooting request is triggered by a friend selection operation of the co-shooting initiator in a friend list. The server then receives response information from the second electronic device and sends the response information to the first electronic device.
In one possible design, the co-shooting request includes one or more of the following: co-shooting a photo, co-shooting a video, and co-shooting a live broadcast.
In one possible design, the server may receive a co-shooting template construction request from the first electronic device and determine a construction result of the co-shooting template; the co-shooting template construction request is triggered by the first electronic device in response to a co-shooting template selection operation of the co-shooting initiator. The server returns the construction result of the co-shooting template to the first electronic device.
In one possible design, the co-shooting template selection operation includes selecting the number of co-shooting users and selecting a co-shooting background; the co-shooting background includes a selfie background of the co-shooting initiator and a selfie background of the co-shooting invitee.
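A minimal sketch of the template construction on the server side might look as follows; the field names and the 2-9 participant bound are assumptions for illustration, not limits stated in the disclosure.

```python
# Hypothetical co-shooting template builder. The template records how many
# users will appear and which background was chosen (e.g. the initiator's
# or the invitee's selfie background); 2-9 is an assumed validation rule.
def build_template(user_count, background):
    if not 2 <= user_count <= 9:
        return {"ok": False, "reason": "unsupported user count"}
    return {"ok": True, "user_count": user_count, "background": background}

result = build_template(2, "initiator-selfie-background")
print(result["ok"])  # True
```

The `ok` flag plays the role of the construction result returned to the first electronic device; only when it is true does the device go on to present the friend list.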
In one possible design, the server may receive a co-shooting confirmation request from the first electronic device and generate co-shooting confirmation data; the co-shooting confirmation request is triggered by the first electronic device in response to a co-shooting confirmation operation of the co-shooting initiator after the first electronic device receives the second co-shooting data from the server.
In one possible design, the co-shooting confirmation operation is triggered by one or more of the following: the co-shooting initiator clicking a co-shooting confirmation key, the co-shooting initiator and the co-shooting invitee maintaining a preset expression, and the co-shooting initiator and the co-shooting invitee continuously holding a pose for a preset duration.
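The "hold a pose for a preset duration" trigger is essentially a debounce over per-frame detections; the sketch below assumes a hypothetical per-frame `pose_matches` signal and a 3-second threshold, neither of which is specified by the disclosure.

```python
HOLD_SECONDS = 3.0  # assumed preset duration

class PoseHoldDetector:
    """Fire the co-shooting confirmation once the pose has been held
    continuously for HOLD_SECONDS; any break in the pose resets the timer."""

    def __init__(self):
        self.held_since = None  # timestamp when the current hold began

    def update(self, pose_matches, now):
        if not pose_matches:
            self.held_since = None   # pose broken: reset
            return False
        if self.held_since is None:
            self.held_since = now    # pose just started
        return now - self.held_since >= HOLD_SECONDS

detector = PoseHoldDetector()
print(detector.update(True, 0.0))  # False: hold just started
print(detector.update(True, 3.0))  # True: held for 3 seconds
```

The same pattern would apply to the "maintain a preset expression" trigger, with an expression classifier supplying the per-frame signal.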
Based on the foregoing embodiments, the present application further provides a readable storage medium storing instructions that, when executed, cause the method performed by the electronic device or server in any of the foregoing embodiments to be implemented. The readable storage medium may include any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, or a magnetic or optical disk.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (22)

1. A co-shooting method, applied to a first electronic device, wherein the first electronic device comprises at least one camera, characterized by comprising:
the first electronic device establishing a co-shooting service with a second electronic device;
the first electronic device uploading data to be co-shot to a server; the data to be co-shot is video stream data of the first electronic device;
receiving first co-shooting data from the server, and displaying the first co-shooting data; the first co-shooting data comprises a background layer, a portrait layer from the first electronic device, and a portrait layer from the second electronic device;
obtaining position adjustment information in response to a position adjustment operation on the portrait layer;
sending the position adjustment information to the server; and
receiving second co-shooting data generated by the server according to the position adjustment information, and displaying the second co-shooting data.
2. The method of claim 1, wherein the portrait layers comprise a portrait layer of a co-shooting initiator and a portrait layer of a co-shooting invitee; the first electronic device belongs to the co-shooting initiator; the second electronic device belongs to the co-shooting invitee; and the obtaining position adjustment information in response to a position adjustment operation on the portrait layer comprises:
triggering the first electronic device to adjust the position of the portrait layer of the co-shooting initiator in response to a position movement of the co-shooting initiator or a movement of the first electronic device, so as to obtain the position adjustment information; or
triggering the first electronic device to adjust the position of the portrait layer of the co-shooting invitee in response to a position movement of the co-shooting invitee or a movement of the second electronic device, so as to obtain the position adjustment information.
3. The method according to claim 1 or 2, wherein the position adjustment information comprises: a zoom scale, a front-back position, a left-right position, and an occluded part.
4. The method according to claim 2 or 3, wherein the first co-shooting data is determined by:
sending a co-shooting request to the server in response to a friend selection operation of the co-shooting initiator in a friend list, and instructing the server to send the co-shooting request to the second electronic device and return response information; and
if it is determined according to the response information that the co-shooting invitee accepts the co-shooting request, uploading the data to be co-shot of the co-shooting initiator to the server, and instructing the server to generate the first co-shooting data according to the data to be co-shot of the co-shooting initiator and the data to be co-shot of the co-shooting invitee.
5. The method of claim 4, wherein the co-shooting request comprises one or more of: co-shooting a photo, co-shooting a video, and co-shooting a live broadcast.
6. The method according to claim 4 or 5, wherein before sending the co-shooting request to the server in response to the friend selection operation of the co-shooting initiator in the friend list, the method further comprises:
sending a co-shooting template construction request to the server in response to a co-shooting template selection operation of the co-shooting initiator, and instructing the server to generate a co-shooting template according to the co-shooting template construction request and return a construction result of the co-shooting template; and
if it is determined according to the construction result that the co-shooting template has been constructed successfully, presenting the friend list for the co-shooting initiator's selection.
7. The method of claim 6, wherein the co-shooting template selection operation comprises: selecting the number of co-shooting users and selecting a co-shooting background; the co-shooting background comprises a selfie background of the co-shooting initiator and a selfie background of the co-shooting invitee.
8. The method according to any one of claims 2-7, further comprising:
after receiving the second co-shooting data from the server, sending a co-shooting confirmation request to the server in response to a co-shooting confirmation operation triggered by the co-shooting initiator.
9. The method of claim 8, wherein the co-shooting confirmation operation is triggered by one or more of the following:
the co-shooting initiator clicking a co-shooting confirmation key, the co-shooting initiator and the co-shooting invitee maintaining a preset expression, and the co-shooting initiator and the co-shooting invitee continuously holding a pose for a preset duration.
10. A co-shooting method, applied to a server, characterized by comprising:
respectively receiving data to be co-shot from a first electronic device and data to be co-shot from a second electronic device; the first electronic device and the second electronic device have established a co-shooting service; the first electronic device and the second electronic device each comprise at least one camera; the data to be co-shot of the first electronic device is video stream data of the first electronic device; the data to be co-shot of the second electronic device is video stream data of the second electronic device;
synthesizing the data to be co-shot of the first electronic device and the second electronic device into first co-shooting data; the first co-shooting data comprises a background layer, a portrait layer from the first electronic device, and a portrait layer from the second electronic device;
sending the first co-shooting data to the first electronic device and the second electronic device;
receiving position adjustment information, and adjusting the first co-shooting data according to the position adjustment information to obtain second co-shooting data; the position adjustment information is determined in response to a position adjustment operation on the portrait layer; and
sending the second co-shooting data to the first electronic device and the second electronic device.
11. The method according to claim 10, wherein the adjusting the first co-shooting data according to the position adjustment information to obtain second co-shooting data comprises:
separating the portrait layers from the background layer in the first co-shooting data; and
adjusting the positions of the portrait layers according to the position adjustment information to obtain the second co-shooting data.
12. The method according to claim 10 or 11, wherein the portrait layers comprise a portrait layer of a co-shooting initiator and a portrait layer of a co-shooting invitee; the first electronic device belongs to the co-shooting initiator; the second electronic device belongs to the co-shooting invitee; and the method further comprises:
acquiring device parameters of the first electronic device and device parameters of the second electronic device; and
adjusting the second co-shooting data into image data matching the device parameters of the first electronic device and sending the image data to the first electronic device, and adjusting the second co-shooting data into image data matching the device parameters of the second electronic device and sending the image data to the second electronic device.
13. The method of claim 12, wherein before sending the first co-shooting data to the first electronic device and the second electronic device, the method further comprises:
after receiving a co-shooting request from the first electronic device, sending the co-shooting request to the second electronic device; the co-shooting request is triggered by a friend selection operation of the co-shooting initiator in a friend list; and
receiving response information from the second electronic device, and sending the response information to the first electronic device.
14. The method of claim 13, wherein the co-shooting request comprises one or more of: co-shooting a photo, co-shooting a video, and co-shooting a live broadcast.
15. The method according to claim 13 or 14, wherein after receiving the co-shooting request from the first electronic device and before sending the co-shooting request to the second electronic device, the method further comprises:
receiving a co-shooting template construction request from the first electronic device, and determining a construction result of the co-shooting template; the co-shooting template construction request is triggered by the first electronic device in response to a co-shooting template selection operation of the co-shooting initiator; and
returning the construction result of the co-shooting template to the first electronic device.
16. The method of claim 15, wherein the co-shooting template selection operation comprises: selecting the number of co-shooting users and selecting a co-shooting background; the co-shooting background comprises a selfie background of the co-shooting initiator and a selfie background of the co-shooting invitee.
17. The method according to any one of claims 10-16, wherein after sending the second co-shooting data to the first electronic device and the second electronic device, the method further comprises:
receiving a co-shooting confirmation request from the first electronic device, and generating co-shooting confirmation data;
wherein the co-shooting confirmation request is triggered by the first electronic device in response to a co-shooting confirmation operation of the co-shooting initiator after the first electronic device receives the second co-shooting data from the server.
18. The method of claim 17, wherein the co-shooting confirmation operation is triggered by one or more of the following:
the co-shooting initiator clicking a co-shooting confirmation key, the co-shooting initiator and the co-shooting invitee maintaining a preset expression, and the co-shooting initiator and the co-shooting invitee continuously holding a pose for a preset duration.
19. An electronic device, comprising: one or more processors; and one or more memories;
wherein the one or more memories store one or more computer programs, the one or more computer programs comprising instructions that, when executed by the one or more processors, cause the electronic device, as a first electronic device, to perform the following steps:
establishing a co-shooting service with a second electronic device;
uploading data to be co-shot to a server; the data to be co-shot is video stream data of the first electronic device;
receiving first co-shooting data from the server, and displaying the first co-shooting data; the first co-shooting data comprises a portrait layer from the first electronic device, a portrait layer from the second electronic device, and a background layer;
obtaining position adjustment information in response to a position adjustment operation on the portrait layer;
sending the position adjustment information to the server; and
receiving second co-shooting data generated by the server according to the position adjustment information, and displaying the second co-shooting data.
20. A server, comprising: one or more processors; and one or more memories;
wherein the one or more memories store one or more computer programs, the one or more computer programs comprising instructions that, when executed by the one or more processors, cause the server to perform the following steps:
respectively receiving data to be co-shot from a first electronic device and data to be co-shot from a second electronic device; the first electronic device and the second electronic device have established a co-shooting service; the first electronic device and the second electronic device each comprise at least one camera; the data to be co-shot of the first electronic device is video stream data of the first electronic device; the data to be co-shot of the second electronic device is video stream data of the second electronic device;
synthesizing the data to be co-shot of the first electronic device and the second electronic device into first co-shooting data; the first co-shooting data comprises a portrait layer from the first electronic device, a portrait layer from the second electronic device, and a background layer;
sending the first co-shooting data to the first electronic device and the second electronic device;
receiving position adjustment information, and adjusting the first co-shooting data according to the position adjustment information to obtain second co-shooting data; the position adjustment information is determined in response to a position adjustment operation on the portrait layer; and
sending the second co-shooting data to the first electronic device and the second electronic device.
21. A computer storage medium, characterized in that the storage medium stores program instructions that, when read and executed by one or more processors, implement the method of any one of claims 1-9 or 10-18.
22. A graphical user interface on an electronic device with a display screen, one or more memories, and one or more processors to execute one or more computer programs stored in the one or more memories, the graphical user interface comprising a graphical user interface displayed when the electronic device performs the method of any of claims 1-9 or 10-18.
CN202110312721.6A 2020-11-30 2021-03-24 Co-shooting method, electronic equipment and server Pending CN114640805A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020113844579 2020-11-30
CN202011384457 2020-11-30

Publications (1)

Publication Number Publication Date
CN114640805A 2022-06-17

Family

ID=81946483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110312721.6A Pending CN114640805A (en) Co-shooting method, electronic equipment and server

Country Status (1)

Country Link
CN (1) CN114640805A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979495A * 2022-06-28 2022-08-30 北京字跳网络技术有限公司 Method, apparatus, device and storage medium for content shooting
CN114979495B * 2022-06-28 2024-04-12 北京字跳网络技术有限公司 Method, apparatus, device and storage medium for content shooting
CN116471429A * 2023-06-20 2023-07-21 上海云梯信息科技有限公司 Image information pushing method based on behavior feedback and real-time video transmission system
CN116471429B * 2023-06-20 2023-08-25 上海云梯信息科技有限公司 Image information pushing method based on behavior feedback and real-time video transmission system
CN117221712A * 2023-11-07 2023-12-12 荣耀终端有限公司 Method for photographing, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007295036A (en) * 2006-04-20 2007-11-08 Make Softwear:Kk Photograph seal making device, and its control method and control program
CN106375193A (en) * 2016-09-09 2017-02-01 四川长虹电器股份有限公司 Remote group photographing method
CN106657791A (en) * 2017-01-03 2017-05-10 广东欧珀移动通信有限公司 Method and device for generating synthetic image
CN109040643A (en) * 2018-07-18 2018-12-18 奇酷互联网络科技(深圳)有限公司 The method, apparatus of mobile terminal and remote group photo
KR20190094040A (en) * 2018-02-02 2019-08-12 천종윤 A mobile terminal for generating a photographed image and a method for generating a photographed image
CN111050072A (en) * 2019-12-24 2020-04-21 Oppo广东移动通信有限公司 Method, equipment and storage medium for remote co-shooting



Similar Documents

Publication Publication Date Title
CN114640805A (en) Co-shooting method, electronic equipment and server
CN109874021B (en) Live broadcast interaction method, device and system
CN110944109B (en) Photographing method, device and equipment
WO2018153267A1 (en) Group video session method and network device
WO2022062896A1 (en) Livestreaming interaction method and apparatus
US8237771B2 (en) Automated videography based communications
US20100238262A1 (en) Automated videography systems
KR102449670B1 (en) Method for creating video data using cameras and server for processing the method
US20160093020A1 (en) Method of procuring integrating and sharing self portraits for a social network
WO2022048651A1 (en) Cooperative photographing method and apparatus, electronic device, and computer-readable storage medium
WO2022142944A1 (en) Live-streaming interaction method and apparatus
CN111988528A (en) Shooting method, shooting device, electronic equipment and computer-readable storage medium
WO2012019517A1 (en) Method, device and system for processing video in video communication
US11924397B2 (en) Generation and distribution of immersive media content from streams captured via distributed mobile devices
US20200349749A1 (en) Virtual reality equipment and method for controlling thereof
CN104717414A (en) Photo processing method and device
US20220078344A1 (en) Communication terminal, display method, and non-transitory computer-readable medium
CN111147766A (en) Special effect video synthesis method and device, computer equipment and storage medium
US20150062283A1 (en) Methods and apparatus for expanding a field of view in a video communication session
CN111654624A (en) Shooting prompting method and device and electronic equipment
EP3736667A1 (en) Virtual reality equipment capable of implementing a replacing function and a superimposition function and method for control thereof
CN112235510A (en) Shooting method, shooting device, electronic equipment and medium
CN116939275A (en) Live virtual resource display method and device, electronic equipment, server and medium
US20130147980A1 (en) Apparatus and associated method for face tracking in video conference and video chat communications
JP7026364B1 (en) Imaging program and computer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination