CN108933891B - Photographing method, terminal and system - Google Patents


Info

Publication number
CN108933891B
CN108933891B (application CN201710385944.9A)
Authority
CN
China
Prior art keywords
terminal
image
video stream
stream data
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710385944.9A
Other languages
Chinese (zh)
Other versions
CN108933891A (en)
Inventor
梁峰起
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710385944.9A
Publication of CN108933891A
Application granted
Publication of CN108933891B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a photographing method, a terminal and a system. The method comprises the following steps: after a connection with a second terminal is established, obtaining second video stream data collected in real time by a camera of the second terminal, wherein the second terminal is any one or more terminals other than the first terminal; obtaining first video stream data collected by a camera of the first terminal; obtaining a target image from each of the second video stream data and the first video stream data according to a preset rule; synthesizing and displaying the obtained target images on a display screen of the first terminal; and, after receiving a photographing instruction, storing the image currently displayed on the display screen of the first terminal. The method enables users of different terminals, or users in different places, to take a group photo in real time, improving the flexibility of photographing, meeting the personalized requirements of users, and improving the user experience.

Description

Photographing method, terminal and system
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a photographing method, a terminal, and a system.
Background
Nowadays, with the rapid development of electronic technology, terminals with a photographing function are popular and widely used by the public. In particular, since the appearance of various photographing applications such as Meitu, BeautyCam and Baidu Magic Photo, users increasingly record beautiful moments with their terminals.
However, current photographing software can only photograph the picture collected by the local camera, or synthesize several already-taken pictures afterwards. This shooting mode is inflexible, the resulting photos have poor applicability, and the personalized requirements of users cannot be met.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first object of the present invention is to provide a photographing method, which synchronizes video stream data in real time through different terminals connected to each other, and synthesizes and displays target images in the different video stream data on a display screen of one terminal, so that users using different terminals or users in different locations can perform real-time combination photographing, thereby improving flexibility of photographing, satisfying personalized requirements of users, and improving user experience.
A second object of the present invention is to provide a first terminal.
The third objective of the present invention is to provide a photographing system.
A fourth object of the invention is to propose a computer-readable storage medium.
To achieve the above object, according to a first aspect of the present invention, there is provided a photographing method, including: after connection with a second terminal is established, second video stream data collected by a camera of the second terminal in real time are obtained, wherein the second terminal is any one or more terminals except the first terminal; acquiring first video stream data acquired by a camera of the first terminal; respectively acquiring a target image from the second video stream data and the first video stream data according to a preset rule; synthesizing and displaying the acquired target image on a display screen of the first terminal; and after receiving a photographing instruction, storing the image currently displayed on the display screen of the first terminal.
To achieve the above object, an embodiment of a second aspect of the present invention provides a first terminal, including:
a processor;
a memory for storing executable instructions of the processor;
the touch display screen is electrically connected with the processor;
wherein the processor is configured to call the program code in the memory to implement the photographing method as described above.
To achieve the above object, an embodiment of a third aspect of the present invention provides a photographing system, including: a first terminal as described above and a second terminal as described above connected to the first terminal.
To achieve the above object, according to a fourth aspect of the present invention, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the photographing method as described above.
The photographing method, the terminal and the system of the embodiment of the invention firstly establish the connection relationship between the first terminal and the second terminal, then obtain the second video stream data collected by the camera of the second terminal in real time, simultaneously obtain the first video stream data collected by the camera of the first terminal, then respectively obtain a target image from the obtained second video stream data and the first video stream data according to the preset rule, then synthesize and display the obtained target image on the display screen of the first terminal, and after receiving a photographing instruction of a user, the synthesized image currently displayed on the display screen of the first terminal is saved. Therefore, the users using different terminals or users in different places can be combined in real time, the flexibility of photographing is improved, the personalized requirements of the users are met, and the user experience is improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a photographing method according to an embodiment of the present invention;
fig. 2a to fig. 2d are schematic diagrams illustrating an implementation manner of a first terminal responding to a real-time group photo request sent by a second terminal in an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a first terminal triggering a real-time group-photo request to a second terminal in an embodiment of the present invention;
FIG. 4 is a flow chart of a photographing method according to another embodiment of the invention;
FIG. 5 is a flow chart of a photographing method according to yet another embodiment of the present invention;
FIGS. 6 a-6 b are schematic views illustrating decoration of a third image according to an embodiment of the present invention;
FIG. 7 is a flowchart of a photographing method according to another embodiment of the invention;
FIG. 8 is a flow chart of a photographing method according to another embodiment of the invention;
FIG. 9 is a flow chart of a method of taking a picture according to one embodiment of the present invention;
FIG. 10 is a diagram illustrating data interaction between different terminals according to the present invention;
FIGS. 11 a-11 b are schematic diagrams of a target image captured in a video stream according to the present invention;
FIGS. 12a-12b are schematic diagrams of a group photo image currently displayed on a display screen in an embodiment of the invention;
fig. 13 is a schematic structural diagram of a first terminal according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a photographing system according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The embodiment of the invention mainly aims to solve the problems that the shooting mode is inflexible and the adaptability of acquiring photos is poor when shooting is performed by the existing shooting software. When a first terminal is connected with a second terminal, the first terminal can acquire second video stream data acquired by a camera of the second terminal in real time and first video stream data acquired by the camera of the first terminal, then respectively acquire a target image from the acquired second video stream data and the acquired first video stream data according to a preset rule, and synthesize and display all the acquired target images on a display screen of the first terminal, and finally store the synthesized image displayed in the display screen of the first terminal according to a received photographing instruction. Therefore, users using different terminals or users in different places can combine photos in real time, the flexibility of photographing is improved, the individual requirements of the users are met, and the user experience is improved.
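The flow just summarized (connect, pull both streams, extract a target image from each, composite, save on instruction) can be sketched end to end. All function and callable names below are illustrative placeholders, not from the patent:

```python
def group_photo_pipeline(first_stream, second_streams, extract, compose,
                         photo_taken):
    """End-to-end sketch of the described flow: read frames from all
    connected terminals in lockstep, extract a target image from each,
    composite them for display, and save the composite that is shown
    when the photographing instruction fires."""
    saved = None
    for frames in zip(first_stream, *second_streams):
        targets = [extract(f) for f in frames]   # per-stream target images
        composite = compose(targets)             # shown on the display
        if photo_taken(composite):
            saved = composite                    # store the current image
            break
    return saved

# Toy run with strings standing in for frames.
result = group_photo_pipeline(
    first_stream=["a1", "a2", "a3"],
    second_streams=[["b1", "b2", "b3"]],
    extract=lambda f: f.upper(),
    compose=lambda ts: "+".join(ts),
    photo_taken=lambda c: c == "A2+B2",
)
print(result)  # A2+B2
```

Each placeholder corresponds to one of the steps S101-S105 detailed in this section.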
That is to say, the photographing method provided by the embodiment of the invention can take group photos for users in different areas.
a photographing method according to an embodiment of the first aspect of the present invention will be described with reference to fig. 1.
Fig. 1 is a flowchart of a photographing method according to an embodiment of the present invention. As shown in fig. 1, the photographing method according to the embodiment of the present invention may include the following steps:
s101, after the first terminal is connected with a second terminal, second video stream data collected by a camera of the second terminal in real time are obtained, wherein the second terminal is any one or more terminals except the first terminal.
In the embodiment of the present invention, the first terminal and the second terminal may be, but are not limited to, smart terminals with photographing and network communication functions, such as a smartphone, a personal digital assistant, a PC, or a tablet computer.
Specifically, in this embodiment of the application, the first terminal may establish a connection with the second terminal in a plurality of ways, for example, through a wireless network, bluetooth, a short-range wireless communication technology, and the like, which is not limited in this application.
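As one concrete possibility for the connection step (the patent leaves the transport open), the sketch below pairs two "terminals" with a plain TCP handshake on the loopback interface. The message constants are invented for illustration:

```python
import socket
import threading

# Assumed handshake messages; the patent does not define a wire protocol.
HANDSHAKE = b"GROUP_PHOTO_HELLO"
ACK = b"GROUP_PHOTO_ACK"

def second_terminal(server_sock):
    """Accept one connection and acknowledge the group-photo handshake."""
    conn, _ = server_sock.accept()
    with conn:
        if conn.recv(64) == HANDSHAKE:
            conn.sendall(ACK)

def first_terminal_connect(host, port):
    """Connect to the second terminal and perform the handshake."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(HANDSHAKE)
        return sock.recv(64) == ACK

server = socket.socket()
server.bind(("127.0.0.1", 0))        # ephemeral port on loopback
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=second_terminal, args=(server,))
t.start()
connected = first_terminal_connect("127.0.0.1", port)
t.join()
server.close()
print(connected)  # True when the handshake succeeds
```

Bluetooth or NFC pairing would follow the same request/acknowledge shape over a different transport.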
It should be noted that, in this embodiment of the application, the second terminal refers to a terminal different from the first terminal, and may include one terminal or a plurality of terminals, and accordingly, the second video stream data may include one video stream data or a plurality of video stream data. This embodiment is not limited to this.
In a specific implementation, before S101, the method may further include:
and respectively receiving and responding to the real-time photo-combination request sent by the second terminal.
Specifically, if a user using the second terminal wants to perform a photo-matching with a user using the first terminal, the real-time photo-matching requests can be sent to the first terminal through the second terminal, so that the first terminal can directly respond to the real-time photo-matching requests sent by the second terminal to establish a connection with the second terminal.
In specific implementation, after receiving a lighting request sent by a second terminal, a first terminal can display the lighting request on a screen of the first terminal, so that a user using the first terminal determines whether to respond to the lighting request, and the user can trigger whether to respond to a real-time lighting request sent by the second terminal in various ways.
For example, as shown in fig. 2a, the user may respond to the group-photo request by touching the "Agree" virtual key on the screen, so that the first terminal, upon detecting that the "Agree" virtual key is clicked, responds to the real-time group-photo request sent by the second terminal and establishes a group-photo connection with it.
Alternatively, as shown in fig. 2b, the user may respond to the group-photo request by pressing a volume key, so that if the first terminal detects that the volume key is pressed, it responds to the real-time group-photo request sent by the second terminal and establishes a group-photo connection with it.
Alternatively, the user may respond to the group-photo request by entering fingerprint information in the fingerprint identification area, as shown in fig. 2c. If the first terminal detects that the fingerprint information entered in the fingerprint identification area matches pre-configured fingerprint information, it responds to the real-time group-photo request sent by the second terminal and establishes a group-photo connection with it.
Alternatively, as shown in fig. 2d, the user may respond to the group-photo request by drawing a preset pattern on the display screen of the first terminal. If the first terminal determines from the user's touch operation that the preset pattern has been entered, it responds to the real-time group-photo request sent by the second terminal and establishes a group-photo connection with it.
It should be noted that the above response modes are only exemplary and are not meant to be specific limitations of the present embodiment.
Alternatively, if the user of the first terminal wants to take a group photo with the user of the second terminal, the method may include, before S101:
sending a real-time group-photo request to each second terminal;
and receiving the response message returned by each second terminal.
Specifically, a user of the first terminal may trigger the sending of a real-time group-photo request to the second terminal in various ways.
For example, as shown in FIG. 3, the user may tap a "real-time group photo" virtual key on the screen to trigger sending the request. Correspondingly, if the first terminal detects that the "real-time group photo" virtual key on the display screen is triggered, it sends a real-time group-photo request to the second terminal.
Alternatively, the user can trigger the request by voice input. Correspondingly, if the first terminal detects a voice command such as "send real-time group photo" from the user, it can send a real-time group-photo request to the second terminal.
It should be noted that these two ways of sending a real-time group-photo request to the second terminal are only exemplary and are not meant to limit the embodiment.
Further, after the first terminal establishes a connection with the second terminal, the second terminal may directly send second video stream data acquired by a camera thereof in real time to the first terminal; or, after being connected with the second terminal, the first terminal may also send a video stream data acquisition request to the second terminal to acquire second video stream data acquired by a camera of the second terminal in real time. This embodiment is not limited to this.
S102, first video stream data collected by a camera of a first terminal is obtained.
Specifically, the first terminal may obtain the second video stream data collected by the camera of the second terminal and also obtain the first video stream data collected by the camera of the first terminal.
It should be noted that, in the embodiment of the present application, S102 and S101 may be performed simultaneously, or S102 may be performed first and then S101 is performed, or S101 may be performed first and then S102 is performed, which is not limited herein.
It should be noted that the camera of the first terminal may include a front camera and a rear camera. In this embodiment, the first terminal may capture the first video stream data by using a front camera.
S103, respectively acquiring a target image from the second video stream data and the first video stream data according to a preset rule.
Specifically, after the second video stream data and the first video stream data are acquired, the acquired second video stream data and the acquired first video stream data may be analyzed to respectively acquire one target image according to a preset rule.
The preset rule may be preset, or may be set by a user as needed. This embodiment is not limited to this.
In a possible implementation form of the present application, the S103 may specifically be:
and respectively acquiring a target image from each frame of picture in the second video stream data and the first video stream data.
That is, the first terminal needs to acquire a target image in real time from each frame of the second video stream data and the first video stream data, respectively, that is, the acquired target image changes in real time with the video stream.
Alternatively, in order to reduce the processing load of the first terminal, in step S103, the method may further include:
and respectively acquiring a target image from partial frame pictures in the second video stream data and the first video stream data at preset frame intervals.
The preset frame interval may be set arbitrarily according to actual needs, for example to 3 frames, 5 frames, or 7 frames, or to any other interval, which is not specifically limited herein.
That is, if the preset frame interval is 5 frames, the first terminal needs to acquire one target image from the first video stream data and the second video stream data respectively every 5 frames, so that the acquired target images change at certain time intervals.
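The interval-based acquisition described above amounts to keeping every Nth frame of each stream. A minimal sketch, using the 5-frame interval from the example:

```python
def sample_frames(stream, interval):
    """Yield one frame from every `interval` frames of a video stream.

    `stream` is any iterable of frames; with interval == 5 this keeps
    frames 0, 5, 10, ... so the acquired target images change at a
    fixed cadence rather than every frame.
    """
    for i, frame in enumerate(stream):
        if i % interval == 0:
            yield frame

frames = [f"frame{i}" for i in range(12)]
print(list(sample_frames(frames, 5)))  # ['frame0', 'frame5', 'frame10']
```

Setting `interval` to 1 recovers the per-frame acquisition of the first implementation form.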
The target image in the present embodiment may be a person image, a background image, or the like.
And S104, synthesizing and displaying the acquired target image on a display screen of the first terminal.
Specifically, after the first terminal acquires the target images from the second video stream data and the first video stream data, respectively, all the acquired target images may be subjected to synthesis processing, and the synthesized images may be displayed on the display screen of the first terminal.
It should be noted that the first terminal may perform composite display on all the acquired target images in any manner, for example, sequentially display each target image on the display screen from left to right; or, displaying each target image from top to bottom in sequence; alternatively, each target image is displayed sequentially from front to back, and the like, which is not limited in this embodiment.
The method for synthesizing and displaying the target image may be configured in advance or may be established by a user, which is not limited in this embodiment.
For example, suppose the configured synthesis mode is: display the target image from the first video stream at the center, and display the target images from the other video streams around it. Then, after acquiring the target images, the first terminal places the target image from the first video stream at the center of the display screen and places the target images from the second video streams around it, completing the synthesis and display of all target images.
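The center-plus-surrounding arrangement in this example can be sketched as a small geometry helper that puts the first terminal's target image at the screen center and spaces the second terminals' images evenly around it. The circular arrangement and the radius parameter are assumptions, since the text only says "around the center":

```python
import math

def layout_positions(n_others, screen_w, screen_h, radius):
    """Return (x, y) center points for each target image: the first
    terminal's image at the screen center, and `n_others` second-terminal
    images evenly spaced on a circle of the given radius around it."""
    cx, cy = screen_w / 2, screen_h / 2
    positions = [(cx, cy)]                      # first terminal: center
    for k in range(n_others):
        angle = 2 * math.pi * k / n_others
        positions.append((cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle)))
    return positions

# Two second terminals on a 1080x1920 portrait screen.
print(layout_positions(2, 1080, 1920, 300))
```

A left-to-right or top-to-bottom mode would be a simpler variant of the same helper, returning evenly spaced points along one axis.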
It should be noted that, in this embodiment, since the first terminal is a target image acquired from the video stream, and the target image in each frame of the video stream may be different, for example, the expression and the posture of a person in each frame of the video stream may be different, the composite image displayed on the display screen of the first terminal is also changed in real time after the acquired target images are combined.
Moreover, because the first video stream is collected by the camera of the first terminal, after the target images are synthesized and displayed on the display screen of the first terminal, the first user of the first terminal can adjust his or her own posture and expression according to the posture, expression and the like of the target image acquired from the second video stream, so as to obtain the desired composite image, making the resulting composite image more natural and real.
And S105, after receiving the photographing instruction, storing the image currently displayed on the display screen of the first terminal.
Specifically, after the first terminal displays the synthesized image on the display screen, the user can input a photographing instruction at any time after seeing the image which is hopeful to be stored, so that the first terminal stores the image currently displayed on the display screen of the first terminal after receiving the photographing instruction.
It should be noted that, in the embodiment of the present application, the image currently displayed on the display screen of the first terminal is changed in real time, so that a user using the first terminal can select a proper photographing time as needed to obtain different composite images. For example, a user using the first terminal may input a photographing instruction when it is determined that all people in the image are smiling; or when the action of all the persons in the image is determined to be consistent, a photographing instruction is input, and the like, so that different co-shooting images are obtained, and the flexibility of co-shooting and the diversity of the obtained images are improved.
Wherein, the user can input the photographing instruction in the following ways. For example, clicking a photographing key on a display screen of the first terminal; or pressing a volume key of a frame of a display screen of the first terminal; or click on any location of the display screen, etc.
Besides, the user may send a photographing instruction to the first terminal by blinking, voice control, or the like, which is not specifically limited herein.
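However it is triggered, the photographing instruction in S105 simply snapshots whatever composite the display is showing at that instant. A minimal sketch with invented class and method names:

```python
class GroupPhotoScreen:
    """Sketch of the display state: it holds the latest composite frame,
    and a photographing instruction saves whatever is shown at that
    moment. Names are illustrative, not from the patent."""

    def __init__(self):
        self.current_composite = None
        self.saved_photos = []

    def show(self, composite):
        # The display refreshes in real time as new composites arrive.
        self.current_composite = composite

    def on_photo_instruction(self):
        # Triggered by a key press, screen tap, blink, or voice command.
        if self.current_composite is not None:
            self.saved_photos.append(self.current_composite)

screen = GroupPhotoScreen()
screen.show("composite#1")
screen.show("composite#2")   # a newer frame replaces the older one
screen.on_photo_instruction()
print(screen.saved_photos)   # ['composite#2']
```

Because the displayed composite changes in real time, the moment the user fires the instruction determines which group photo is saved.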
According to the photographing method provided by the embodiment of the invention, after a first terminal is connected with a second terminal, the first terminal firstly acquires second video stream data acquired by a camera of the second terminal in real time and first video stream data acquired by the camera of the first terminal, then respectively acquires a target image from the acquired second video stream data and the acquired first video stream data according to a preset rule, and then synthesizes and displays the acquired target image on a display screen of the first terminal, and after receiving a photographing instruction of a user, a synthesized image currently displayed on the display screen of the first terminal is saved. Therefore, the users using different terminals or users in different places can be combined in real time, the flexibility of photographing is improved, the personalized requirements of the users are met, and the user experience is improved.
As can be seen from the above analysis, after the first terminal establishes a connection with the second terminal and obtains the second video stream data collected in real time by the camera of the second terminal, it can acquire a target image from each video stream, and then synthesize and display the target images on its display screen, so that the user of the first terminal obtains the desired photo.
Similarly, after the second terminal establishes a connection with the first terminal, it can also acquire the first video stream data of the first terminal, acquire a target image from each video stream, and synthesize and display the target images on the second terminal, so that the user of the second terminal obtains the desired photo.
Thus, in another embodiment of the present invention, the photographing method may further include the steps of:
and respectively sending the first video stream data collected by the camera of the first terminal to the second terminal.
Specifically, after the first terminal and the second terminal establish a connection relationship, when the first terminal acquires second video stream data acquired by a camera of the second terminal, the first video stream data acquired by the camera of the first terminal can be sent to the second terminal, so that the second terminal performs an operation of acquiring a target image according to the received first video stream data and the second video stream data of the second terminal, and a second user using the second terminal can acquire a desired group photo.
It should be noted that, in this embodiment, the first video stream data and the second video stream data acquired by the first terminal and the second terminal are identical. However, the target image acquired by the first terminal from the first video stream data and the second video stream data may be different from the target image acquired by the second terminal from the first video stream data and the second video stream data, and thus the composite image displayed on the display screen of the first terminal may be different from the composite image displayed on the display screen of the second terminal.
For example, if the first terminal acquires the target image from the video stream data every 5 frames and the second terminal acquires the target image from the video stream data every 3 frames, the target images acquired by the first terminal and the second terminal are not identical, so that the composite image displayed on the display screen of the first terminal may be different from the composite image displayed on the display screen of the second terminal.
Through the analysis, after the first terminal acquires the first video stream data and the second video stream data, the first terminal may acquire the target image from the video stream at a preset frame interval, or acquire the target image from each frame of the video stream. In a possible implementation form of the present application, the first terminal may further obtain the target image according to an instruction of a user, which is described in detail below with reference to fig. 4.
Fig. 4 is a flowchart of a photographing method according to another embodiment of the invention.
As shown in fig. 4, the photographing method according to the embodiment of the present invention may include the following steps:
s401, after the first terminal is connected with a second terminal, second video stream data collected by a camera of the second terminal in real time are obtained, wherein the second terminal is any one or more terminals except the first terminal.
S402, acquiring first video stream data acquired by a camera of a first terminal.
S403, receiving a target image acquisition instruction sent by a first user using a first terminal, wherein the acquisition instruction comprises the type, the range and the acquisition mode of the target image.
Specifically, after the first terminal acquires the second video stream data and the first video stream data, a target image acquisition instruction transmitted by a first user using the first terminal may be received to acquire corresponding target images in the second video stream data and the first video stream data, respectively.
Wherein the type of the image is used to indicate whether the image is a person or a background. Therefore, in the present embodiment, the target image type in the acquisition instruction may include a plurality of types. For example, the following steps are carried out:
example one, a person image in the second video stream data is acquired, and a person image and a background image in the first video stream data are acquired;
example two, the person image and the background image in the second video stream data are acquired, and the person image in the first video stream data is acquired;
example three, the person image in the second video stream data and the first video stream data is acquired.
In addition, in the present embodiment, the range in the acquisition instruction is used to indicate the range of the area to be acquired when the target image in the second video stream data and the first video stream data is acquired.
For example, a person head region in the second video stream data and the first video stream data is acquired; for another example, the head and upper body area of the person in the second video stream data and the first video stream data, and the like are acquired.
Further, in this embodiment, the obtaining manner in the obtaining instruction is used to indicate a matting manner to be adopted when the target image is obtained from the second video stream data and the first video stream data.
The matting modes may include: magic wand, color range, magnetic lasso, feathering, pen tool, mask, channel matting, and the like.
It will be appreciated that different matting approaches may yield different boundary quality or sharpness in the resulting image. Therefore, the user can select a matting mode according to the type of target image to be acquired. For example, the magnetic lasso can extract an image with clear edge contrast; as another example, the magic wand may be used to extract an image containing a large patch of similar color, or an image surrounded by a large area of uniform color.
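The patent does not give a concrete algorithm for any of these matting modes. As an illustrative sketch only, a color-range matte can be computed by marking as background every pixel whose channels all lie within a tolerance of a reference color; the function name and parameters below are hypothetical:

```python
import numpy as np

def color_range_matte(image: np.ndarray, ref_color, tolerance: int = 30) -> np.ndarray:
    """Boolean foreground mask: True wherever a pixel falls outside the
    background color range defined by ref_color +/- tolerance per channel."""
    diff = np.abs(image.astype(np.int16) - np.asarray(ref_color, dtype=np.int16))
    is_background = np.all(diff <= tolerance, axis=-1)
    return ~is_background

# Toy 2x2 RGB frame: top row is a green backdrop, bottom row is the "person".
frame = np.array([[[0, 255, 0], [5, 250, 5]],
                  [[200, 120, 90], [10, 10, 10]]], dtype=np.uint8)
mask = color_range_matte(frame, ref_color=(0, 255, 0), tolerance=20)
print(mask.tolist())  # [[False, False], [True, True]]
```

A real terminal would apply such a mask per frame of the video stream; edge feathering and the other listed modes would require more elaborate processing.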
In a possible implementation form, the target image acquisition instruction sent by the user may not include the acquisition mode, so that the first terminal automatically matches the corresponding acquisition mode according to the type of the target image.
For example, if the matting mode corresponding to a person target is channel matting, then when the first terminal determines that the target image to be acquired by the user is a person image, the first terminal can automatically adopt the channel matting mode to extract the person image from the video stream data.
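This automatic matching can be sketched as a simple lookup from target-image type to a default matting mode. Only the person-to-channel-matting pairing is stated above; the other entries and the fallback below are illustrative assumptions:

```python
# Hypothetical lookup table from target-image type to a default matting mode.
# Only "person" -> channel matting is stated in the text; the rest is assumed.
DEFAULT_MATTING_MODE = {
    "person": "channel_matting",
    "background": "color_range",
}

def auto_match_matting_mode(target_type: str) -> str:
    """Pick a matting mode when the acquisition instruction omits one."""
    return DEFAULT_MATTING_MODE.get(target_type, "magic_wand")

print(auto_match_matting_mode("person"))  # channel_matting
```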
S404, respectively acquiring a target image from the second video stream data and the first video stream data according to the acquisition instruction.
And S405, synthesizing and displaying the acquired target image on a display screen of the first terminal.
It is understood that the composite image displayed on the display screen of the first terminal includes a background image, and the background image may be a background image in the first video stream data, a background image in the second video stream data, or a background image obtained from a preset background image library.
Accordingly, the target images acquired by the first terminal may include a first image and a second image, where the first image contains a background image. Correspondingly, step S405 specifically includes:
synthesizing and displaying the second image on the first image.
For example, assume that, in accordance with the acquisition instruction of the first user, the first terminal acquires the person image from the second video stream data, and acquires the person image and the background image from the first video stream data. At the time of image composition, the person image acquired from the second video stream data may then be directly composited onto the background image acquired from the first video stream data.
Alternatively, in another embodiment of the present invention, the S405 may further include:
acquiring a background image from a preset image library;
and synthesizing and displaying the acquired target image on the background image.
That is, in this embodiment, the target image acquisition instruction sent by the first user is to acquire the person image in the second video stream data and the first video stream data. Then, after the first terminal respectively acquires the target image from the second video stream data and the first video stream data, the first terminal may acquire the required background image from the preset image library, and synthesize the acquired target image on the acquired background image.
The preset image library may be pre-stored in the first terminal by the user, or may be on the network side, which is not limited in this embodiment.
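The synthesis in S405 — pasting each acquired target image onto the chosen background — can be sketched as follows. This is an illustrative fragment only; names are hypothetical, and a real implementation would also handle resizing and placement of each target image:

```python
import numpy as np

def composite_over(background: np.ndarray, target: np.ndarray,
                   mask: np.ndarray) -> np.ndarray:
    """Copy target pixels onto the background wherever the matte mask is True."""
    out = background.copy()
    out[mask] = target[mask]
    return out

bg = np.zeros((2, 2, 3), dtype=np.uint8)          # background image (black)
person = np.full((2, 2, 3), 255, dtype=np.uint8)  # extracted target (white)
matte = np.array([[True, False],
                  [False, True]])                 # matte from the acquisition step
combined = composite_over(bg, person, matte)
print(combined[0, 0].tolist(), combined[0, 1].tolist())  # [255, 255, 255] [0, 0, 0]
```

The background array here could equally come from the first video stream, the second video stream, or the preset image library, as described above.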
S406, after the photographing instruction is received, the image currently displayed on the display screen of the first terminal is stored.
According to the photographing method provided by the embodiment of the invention, the target images can be respectively acquired from the second video stream data and the first video stream data according to the target image acquisition instruction sent by the first user, and the acquired target images are synthesized and displayed on the display screen of the first terminal. Therefore, different target images can be acquired according to the acquisition requirements of the user. In addition, different background images can be adopted for synthesis operation according to the synthesis requirement of the user, so that the user can obtain favorite photos, the personalized requirement of the user is further met, and the user experience is improved.
Further, after the first terminal displays the target image on the display screen of the first terminal, the displayed image may be adjusted according to the needs of the user, and the photographing method provided by the present application is further described below with reference to fig. 5 and 7.
Fig. 5 is a flowchart of a photographing method according to still another embodiment of the present invention. As shown in fig. 5, the photographing method of this embodiment may include:
S501, after the first terminal establishes a connection with a second terminal, second video stream data collected in real time by a camera of the second terminal is obtained, wherein the second terminal is any one or more terminals other than the first terminal.
S502, acquiring first video stream data acquired by a camera of a first terminal.
S503, receiving a target image acquisition instruction sent by a first user using the first terminal, wherein the acquisition instruction comprises the type, the range and the acquisition mode of the target image.
S504, according to the obtaining instruction, a target image is respectively obtained from the second video stream data and the first video stream data.
And S505, synthesizing and displaying the acquired target image on a display screen of the first terminal.
S506, obtaining a retouching instruction input by a first user using the first terminal, wherein the retouching instruction comprises the identification of the third image and a retouching mode.
The third image is one or more target images in the composite image currently displayed on the display screen of the first terminal.
Specifically, after the target image acquired by the first terminal is synthesized and displayed on the display screen of the first terminal, a retouching instruction input by the first user can be received, so that retouching operation on the synthesized image displayed on the display screen of the first terminal is realized.
In this embodiment, the retouching instruction input by the first user may be obtained in the following manners.
In example one, the retouching instruction is determined by recognizing a first voice input by the first user.
The first voice is a retouching voice command input by the user, for example, voice commands such as "whiten the skin", "shrink the mouth", "slim the face", and the like.
In example two, the retouching instruction is determined according to the retouching identifier selected by the first user.
Specifically, while synthesizing and displaying the target image, the first terminal can display various retouching identifiers on the display screen, so that the user can trigger and input a retouching instruction by selecting the corresponding retouching identifier.
In this embodiment, the identifier of the third image may be any identifier capable of distinguishing the third image from the other images; for example, it may be the position of the third image within the composite image, or the identifier of the video stream corresponding to the third image.
In addition, the retouching modes in this embodiment may include any one or more of: blurring the background, cropping the image, beautifying a person, reshaping a person, and applying a filter, which is not limited in this embodiment.
S507, performing retouching processing on the third image according to the retouching instruction.
Specifically, after acquiring the retouching instruction input by the first user, the first terminal may perform retouching processing on the third image according to the retouching instruction.
Accordingly, the retouching processing may be performed on the third image in various ways according to the retouching instruction. For example:
example one, the third image is subjected to a translucency process.
Specifically, if the user wants that the combined image includes as much information as possible, the partial image is semi-transparently processed, for example, a character displayed in front is semi-transparently displayed, so that the combined image can see both the character displayed in front and the character or the background displayed in back.
It will be appreciated that the third image may be displayed semi-transparently by modifying the values of its alpha channel, and different third images may be assigned different alpha-channel values, so that the composite image includes third images with different degrees of transparency.
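A minimal sketch of this alpha-channel adjustment, together with the standard "over" blend that produces the semi-transparent appearance, is shown below (function names are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def set_alpha(rgba: np.ndarray, alpha: int) -> np.ndarray:
    """Return a copy of an RGBA image whose alpha channel is replaced."""
    out = rgba.copy()
    out[..., 3] = alpha
    return out

def blend_over(front_rgba: np.ndarray, back_rgb: np.ndarray) -> np.ndarray:
    """Standard 'over' blend of an RGBA front layer onto an RGB back layer."""
    a = front_rgba[..., 3:4].astype(np.float64) / 255.0
    mixed = front_rgba[..., :3] * a + back_rgb * (1.0 - a)
    return np.rint(mixed).astype(np.uint8)

front = np.zeros((1, 1, 4), dtype=np.uint8)
front[..., :3] = 255                        # white person displayed in front
back = np.zeros((1, 1, 3), dtype=np.uint8)  # black person/background behind
half = set_alpha(front, 128)                # roughly 50 % opacity
print(blend_over(half, back).tolist())      # [[[128, 128, 128]]] - both layers visible
```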
Example two, the third image is subjected to a beauty process.
The beautifying processing of a person image may include: brightening the skin, brightening the eyes, and removing acne marks and blemishes.
Example three, the third image is decorated.
The person image in the third image may be decorated by adding ornaments to it, such as glasses, a hat, a bow tie, ear studs, and the like, as shown in fig. 6a.
Alternatively, the background image in the third image may be decorated by adding custom text or a pattern to it, such as text like "Cape No. 8" or "HAPPY", as shown in fig. 6b;
and S508, after receiving the photographing instruction, the image currently displayed on the display screen of the first terminal is stored.
According to the photographing method provided by the embodiment of the invention, after the target images respectively acquired from different video stream data are synthesized and displayed, the composite image displayed on the display screen of the first terminal can be retouched according to the retouching instruction input by the user. The user can thus perform personalized retouching as desired and generate a personalized composite photograph carrying the user's own sentiment, which improves user experience, increases the user's enthusiasm for using the product, and strengthens user stickiness to the product.
Fig. 7 is a flowchart of a photographing method according to another embodiment of the present invention, where the photographing method of this embodiment may include the following steps:
and S701, after the first terminal is connected with a second terminal, acquiring second video stream data acquired by a camera of the second terminal in real time, wherein the second terminal is any one or more terminals except the first terminal.
S702, acquiring first video stream data acquired by a camera of a first terminal.
S703, receiving a target image acquisition instruction sent by a first user using the first terminal, wherein the acquisition instruction comprises the type, the range and the acquisition mode of the target image.
S704, according to the obtaining instruction, respectively obtaining a target image from the second video stream data and the first video stream data.
S705, the acquired target image is displayed on the display screen of the first terminal in a composite mode.
S706, obtaining a retouching instruction input by a first user using the first terminal, wherein the retouching instruction comprises an identifier of the third image and a retouching mode.
And S707, performing retouching processing on the third image according to the retouching instruction.
S708, acquiring an adjusting instruction input by the first user, wherein the adjusting instruction comprises an identifier of the fourth image.
The fourth image refers to any one or more target images in the composite image displayed on the display screen of the first terminal, and may be the same as or different from the third image.
Specifically, in the embodiment of the present application, after the retouching processing has been performed on the composite image, the fourth image in that image may further be subjected to adjustment processing.
The first terminal can obtain the adjustment instruction in the following ways.
In an example one, the adjustment instruction is determined according to a sliding operation performed by a first user on a display screen of the first terminal.
For example, the first user may input the adjustment instruction by sliding the fourth image on the display screen of the first terminal. Therefore, the first terminal can control the fourth image to move according to the sliding operation of the user.
Example two, the second voice input by the first user is subjected to voice recognition, and the adjustment instruction is determined.
The second voice is an adjusting voice instruction input by the user. For example: the user can input voice instructions such as "rotate the glasses", "enlarge the person image", "move the person on the left side to the left", and the like.
Example three, the adjustment instruction is determined according to the adjustment button selected by the first user.
Specifically, the first terminal may display an adjustment button, such as a "+", "-" button, on the display screen, and the first user may select the corresponding button to input an adjustment instruction, so that the first terminal may perform a corresponding adjustment operation according to the adjustment instruction selected by the user.
S709, adjusting a range and/or a position of the fourth image according to the adjustment instruction.
Specifically, after the adjustment instruction input by the first user is acquired, the first terminal may perform adjustment processing on the fourth image according to the adjustment instruction.
In this embodiment, the adjustment processing performed on the fourth image according to the adjustment instruction may be carried out in a plurality of ways. For example, the person image and the background image in the fourth image may be enlarged or reduced at a ratio of 1:2; alternatively, a decoration on the person in the fourth image may be rotated, shifted, or the like.
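These adjustments can be expressed directly on the pixel array. As an illustration only, the nearest-neighbour scaling below stands in for whatever interpolation a real terminal would use, and the 90-degree rotation stands in for an arbitrary-angle rotation of a decoration:

```python
import numpy as np

def enlarge_nearest(image: np.ndarray, factor: int) -> np.ndarray:
    """Enlarge an image by an integer factor via nearest-neighbour repetition."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

target = np.array([[1, 2],
                   [3, 4]], dtype=np.uint8)  # toy 2x2 target image
doubled = enlarge_nearest(target, 2)         # 1:2 enlargement -> 4x4
rotated = np.rot90(target)                   # rotate a decoration by 90 degrees
print(doubled.shape, rotated.tolist())       # (4, 4) [[2, 4], [1, 3]]
```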
And S710, after receiving the photographing instruction, storing the image currently displayed on the display screen of the first terminal.
According to the photographing method provided by the embodiment of the invention, after the first terminal displays the target images acquired from the video streams on the display screen, the composite image displayed in the display screen of the first terminal can be subjected to image repairing and adjustment according to the instruction of the first user, so that the composite image displayed on the first terminal is more harmonious, the aesthetic requirement of the user is better met, and the user experience is further improved.
Through the above analysis, the first terminal can acquire the adjustment instruction or the retouching instruction input by the user in a plurality of ways. Correspondingly, the first terminal may also obtain the photographing instruction in a plurality of ways; the way in which the first terminal obtains the photographing instruction is described in detail below with reference to fig. 8.
As shown in fig. 8, fig. 8 is a flowchart of a photographing method according to another embodiment of the present invention, and the photographing method according to this embodiment may include the following steps:
S801, after the first terminal establishes a connection with a second terminal, second video stream data collected in real time by a camera of the second terminal is obtained, wherein the second terminal is any one or more terminals other than the first terminal.
S802, first video stream data collected by a camera of a first terminal is obtained.
And S803, receiving a target image acquisition instruction sent by a first user using the first terminal, wherein the acquisition instruction comprises the type, the range and the acquisition mode of the target image.
S804, according to the acquisition instruction, respectively acquiring a target image from the second video stream data and the first video stream data.
And S805, synthesizing and displaying the acquired target image on a display screen of the first terminal.
S806, obtaining a retouching instruction input by a first user using the first terminal, wherein the retouching instruction comprises an identifier of the third image and a retouching mode.
And S807, performing retouching processing on the third image according to the retouching instruction.
S808, acquiring an adjustment instruction input by the first user using the first terminal, wherein the adjustment instruction comprises an identifier of the fourth image.
And S809, adjusting the range and/or the position of the fourth image according to the adjusting instruction.
S810, judging whether the image currently displayed on the display screen of the first terminal meets a preset shooting condition; and if so, triggering to generate a photographing instruction.
In this embodiment, whether the image currently displayed on the display screen of the first terminal satisfies the preset shooting condition may be judged in multiple ways. For example:
in an example one, whether a target image in an image currently displayed on a display screen of a first terminal meets a first preset shooting condition is judged.
For example, when no target image in the image currently displayed on the display screen of the first terminal has its eyes closed or blinking, it is determined that the target image satisfies the first preset shooting condition; or, when the mouth corners of a target image in the currently displayed image are raised (i.e., the person is smiling), it is determined that the target image satisfies the first preset shooting condition.
In example two, it is judged whether the sharpness of the image currently displayed on the display screen of the first terminal satisfies a second preset shooting condition.
For example, if the sharpness of the image currently displayed on the display screen of the first terminal reaches its optimum, it is determined that the second preset shooting condition is satisfied.
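The patent does not specify how sharpness is measured. One common proxy (an assumption here, not the patent's method) is the variance of a discrete Laplacian, which is high for crisp edges and low for smooth or blurred content:

```python
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    """Variance of the 4-neighbour discrete Laplacian over interior pixels."""
    g = gray.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def meets_shooting_condition(gray: np.ndarray, threshold: float) -> bool:
    """Hypothetical second preset shooting condition: sharpness above a threshold."""
    return sharpness_score(gray) >= threshold

edge = np.zeros((8, 8)); edge[:, 4:] = 255.0                   # hard edge: sharp
ramp = np.tile(np.arange(8, dtype=np.float64) * 32.0, (8, 1))  # smooth ramp: not sharp
print(sharpness_score(edge) > sharpness_score(ramp))  # True
```

In practice the terminal might track this score across frames and trigger the photographing instruction when it peaks.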
According to the photographing method, the first terminal synthesizes and displays, on its display screen, the target images respectively acquired from the first video stream and the second video stream. After the currently displayed image has been retouched and adjusted, the photographing action can be executed automatically once the displayed image satisfies the shooting condition. This reduces user operations, realizes automatic photographing, improves photographing efficiency, and yields a group photo that is more natural and sharper.
It should be noted that, in this embodiment, when the first terminal determines whether to perform a photographing operation on the image currently displayed on the display screen, other modes may be used besides S810. Specifically, in another embodiment of the present invention, after step S809, the photographing method of the present embodiment further includes the steps of:
and generating a photographing instruction according to the touch operation of a first user using the first terminal.
Specifically, after the current image displayed in the display screen of the first terminal is adjusted, a first user using the first terminal can click a photographing button in the display screen of the first terminal and send a photographing instruction to the first terminal, so that the first terminal performs photographing operation on the current image in the current display screen.
Through the analysis, after the first terminal acquires the target image from the acquired second video stream data and the first video stream data, the acquired target image can be synthesized and displayed on the display screen of the first terminal for photographing, so that the user can acquire the satisfactory group photo.
It should be noted that, because the image currently displayed on the display screen of the first terminal may be different from the image currently displayed on the display screen of the second terminal, the first terminal may further send the photographed image to the second terminal, so that the second terminal obtains a photo different from the image currently displayed on its own display screen, further improving user experience.
Therefore, in another embodiment of the present invention, after step S810, the photographing method of this embodiment further includes: sending the currently displayed image to each second terminal respectively.
That is, by sending the image currently displayed by the first terminal to the second terminal, the user of the second terminal can also obtain the same photo as the first user, and may then post the obtained photo as a status update, or save it as a keepsake.
It is understood that the photographing method of the above embodiment is also applicable to the second terminal, and the implementation process of the second terminal is the same as that of the first terminal.
The embodiment of the present invention is described below by taking the second terminal as an example, and specifically refer to fig. 9.
As shown in fig. 9, the present invention further provides a photographing method.
FIG. 9 is a flowchart of a photographing method according to an embodiment of the invention. The photographing method of the embodiment may include the following steps:
and S901, after the connection with the first terminal is established, triggering and opening a camera of the second terminal.
For a process of establishing a connection between the second terminal and the first terminal, reference may be made to the detailed description in the foregoing embodiments, which is not described herein again.
Specifically, after the connection with the first terminal is established, the second terminal may automatically turn on a camera of the second terminal, so as to obtain corresponding video stream data through the camera. Correspondingly, it can be understood that the first terminal may also automatically trigger to turn on the camera of the first terminal after determining that the connection with the second terminal is established.
It should be noted that the second terminal may have a front camera and a rear camera, and in this embodiment, the camera opened by the second terminal may be the front camera.
And S902, synchronizing second video stream data acquired by a camera of the second terminal in real time to the first terminal so that the first terminal acquires a target image from the second video stream data, and synthesizing and displaying the acquired target image and the target image acquired by the camera of the first terminal.
Specifically, after the second terminal collects the second video stream data, the second video stream data may be synchronized with the first terminal, so that the first terminal obtains a corresponding target image from the second video stream data and the first video stream data of the first terminal, and synthesizes and displays the obtained target image on the display screen of the first terminal.
According to the photographing method provided by the embodiment of the invention, after the connection with the first terminal is established, the camera of the second terminal is automatically triggered to be opened so as to collect the second video stream data, and the collected second video stream data is synchronized to the first terminal, so that the first terminal can obtain the target image from the obtained second video stream data, and the obtained target image and the target image obtained by the camera of the first terminal are synthesized and displayed. Therefore, the users using different terminals or users in different places can be combined in real time, the flexibility of photographing is improved, the personalized requirements of the users are met, and the user experience is improved.
It should be noted that, through analysis of the foregoing embodiments, it can be known that an image currently displayed on a display screen of the first terminal may be different from an image currently displayed on a display screen of the second terminal, and therefore the first terminal can send the image currently displayed on the display screen to the second terminal, so that the second terminal obtains a photo that is different from the image currently displayed on the display screen of the second terminal, and user experience is further improved.
Thus, in another embodiment of the present invention, after step S901, the photographing method of the present embodiment further includes:
and receiving an image sent by the first terminal, wherein the image comprises a target image acquired by a camera of the second terminal and a target image acquired by a camera of the first terminal.
Specifically, after the second terminal receives the image sent by the first terminal, the image can be saved for commemoration.
The above embodiments are specifically described below with a concrete example, with reference to fig. 10. Fig. 10 is a schematic diagram of data interaction between a first terminal and a second terminal according to the present invention.
Assuming that the data interaction process includes a first terminal a and a second terminal B, the interaction process of the first terminal a and the second terminal B may include the following steps:
S110, the first terminal A sends a real-time group-photo request to the second terminal B.
Specifically, when a user of the first terminal A wants to take a group photo with a user of the second terminal B, the user of the first terminal A may trigger a "real-time group photo" key on the display screen of the first terminal A; if the first terminal A detects that the "real-time group photo" key on the display screen has been triggered, the first terminal A sends a real-time group-photo request to the second terminal B;
alternatively, when the user of the first terminal A wants to take a group photo with the user of the second terminal B, the user of the first terminal A may trigger the first terminal A to send the real-time group-photo request by means of voice input.
S111: and the second terminal B sends a response message to the first terminal A and triggers to open a camera in the second terminal.
After receiving the real-time group-photo request sent by the first terminal A, the second terminal B can display the received request information on its display screen and prompt the user, via a prompt tone, that a real-time group-photo request sent by the first terminal A has been received. When the user of the second terminal B sees the real-time group-photo request displayed on the display screen and agrees to take the group photo, the user may click an "agree" virtual key on the display screen of the second terminal B to transmit response information to the first terminal A.
S112: and after receiving the response information fed back by the second terminal B, the first terminal A triggers to open the camera in the first terminal.
Specifically, after the first terminal A sends the real-time group-photo request to the second terminal B, it may detect whether response information fed back by the second terminal B is received. If the response information fed back by the second terminal B is detected, the received response information is parsed, and a corresponding operation, such as establishing a real-time group-photo connection, is performed according to the parsed feedback.
And S113, the first terminal A acquires second video stream data acquired by a camera of the second terminal B in real time.
And S114, the first terminal A acquires first video stream data acquired by a camera of the first terminal A in real time.
And S115, the first terminal A sends the first video stream data to the second terminal B in real time.
Specifically, after the first terminal a establishes a connection with the second terminal B, an instruction for acquiring video stream data may be sent to the second terminal B, so as to acquire second video stream data acquired by a camera of the second terminal in real time. Meanwhile, the first terminal A also collects first video stream data in real time through a camera of the first terminal A.
It should be noted that, after the first terminal A establishes the connection with the second terminal B, the second terminal B may likewise obtain, in real time, the video stream data collected by the camera of the first terminal A as well as the video stream data collected by its own camera. That is, the first terminal A and the second terminal B synchronize the video stream data with each other.
S116, the first terminal a obtains the corresponding target image from the second video stream data and the first video stream data according to the default obtaining rule.
And S117, the second terminal B acquires the corresponding target image from the second video stream data and the first video stream data according to the default acquisition rule.
For example, the default acquisition rule of the first terminal A may be to acquire the person from each frame of the second video stream data, and to acquire the person and the background from each frame of the first video stream data; the default acquisition rule of the second terminal B may be to acquire the person and the background from each frame of the second video stream data, and to acquire the person from each frame of the first video stream data. Fig. 11a shows one frame of the first video stream, and fig. 11b shows one frame of the second video stream.
And S118, the first terminal A synthesizes and displays the acquired target image on a display screen, and stores the currently displayed image after acquiring the photographing instruction.
And S119, the second terminal B synthesizes and displays the acquired target image on a display screen, and stores the currently displayed image after acquiring the photographing instruction.
In addition, after the first terminal a displays the acquired target image on the display screen in a composite manner, the user using the first terminal a can also perform appropriate image modification and adjustment on the displayed composite image. For example, adjusting the position, or orientation, of the target image. When a user using the first terminal a wants to save a composite image displayed on the current display screen, the user can save the current image by clicking a photographing button on the display screen, so that the user can obtain a satisfactory composite photograph.
Wherein the composite image displayed on the display screen of the first terminal a is as shown in fig. 12 a.
Meanwhile, the second terminal B may also synthesize and display the acquired target image on the display screen, and the user using the second terminal B may also adjust the position or direction of the synthesized image according to the adjustment instruction of the user using the second terminal B. The second terminal B may receive a photographing instruction input by a user using the second terminal B after adjusting the composite image displayed on the display screen, and the second terminal B stores the composite image displayed on the display screen according to the photographing instruction.
S120, the first terminal A sends the stored composite image to the second terminal B.
Specifically, after the first terminal A saves the composite image currently displayed on its display screen, it may transmit the saved composite image to the second terminal B, so that the user of the second terminal B also obtains the same composite photograph as the first terminal A.
Similarly, after the second terminal B saves the composite image displayed on its display screen, it may transmit the saved composite image to the first terminal A, so that the user of the first terminal A obtains the same composite photograph as the second terminal B. The composite image saved by the second terminal B is shown in fig. 12b.
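Exchanging saved composite images between the two terminals requires some framing so the receiver knows where one image ends. The patent does not specify a wire format; the length-prefixed framing below is one common, illustrative choice, demonstrated over a loopback socket standing in for terminals A and B:

```python
import socket
import struct
import threading

def send_image(conn, image_bytes):
    """Length-prefix the encoded image so the receiver knows its size."""
    conn.sendall(struct.pack(">I", len(image_bytes)) + image_bytes)

def recv_image(conn):
    """Read the 4-byte big-endian length header, then exactly that many bytes."""
    header = conn.recv(4)
    (length,) = struct.unpack(">I", header)
    data = b""
    while len(data) < length:
        chunk = conn.recv(length - len(data))
        if not chunk:
            raise ConnectionError("peer closed before full image arrived")
        data += chunk
    return data

# Loopback demo: the "second terminal" accepts and receives in a thread
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

received = {}
def peer():
    conn, _ = server.accept()
    received["img"] = recv_image(conn)
    conn.close()

t = threading.Thread(target=peer)
t.start()
client = socket.create_connection(("127.0.0.1", port))
send_image(client, b"\x89PNG...fake image bytes")
client.close()
t.join()
```

In practice the payload would be the JPEG/PNG bytes of the saved composite photograph.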
In this embodiment, video stream data are synchronized in real time, and the target images in the video streams collected by different terminals are composited and displayed on the display screen of a single terminal, so that an instant group photograph can be taken. This improves the flexibility of photographing, meets the varied requirements of users, and improves the user experience.
Fig. 13 is a schematic structural diagram of a first terminal according to an embodiment of the present invention.
The terminal may be, for example, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 13, the terminal includes a processor 1100, a memory 1200, and a touch display screen 1300, wherein the memory 1200 is configured to store instructions executable by the processor, and the touch display screen 1300 is electrically connected to the processor 1100;
the processor 1100 is configured to call the program code in the memory 1200 to implement the photographing method of the above-described embodiments.
In addition, the terminal may further include: a power component, a multimedia component, an audio component, an interface for input/output (I/O), a sensor component, and a communication component.
In particular, the memory 1200 may be configured to store various types of data. Examples of such data include instructions for any application or method operating on the terminal, contact data, phonebook data, messages, pictures, videos, etc. The memory 1200 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply assembly provides power to the various components. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal.
The multimedia component includes a touch-sensitive display screen providing an output interface between the terminal and the user. In some embodiments, the touch display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the terminal is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component is configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the terminal is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
The I/O interface provides an interface between the processing component and a peripheral interface module, which may be a keyboard, click wheel, button, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly includes one or more sensors that provide various aspects of status assessment for the terminal. For example, the sensor assembly may detect the open/closed state of the terminal and the relative positioning of components such as the display and keypad; it may also detect a change in the position of the terminal or of one of its components, the presence or absence of user contact with the terminal, the orientation or acceleration/deceleration of the terminal, and a change in the terminal's temperature. The sensor assembly may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component is configured to facilitate wired or wireless communication between the terminal and other devices. The terminal may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described photographing method.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by processor 1100 of the terminal to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
To achieve the above object, the invention further provides a photographing system.
Fig. 14 is a schematic structural diagram of a photographing system according to an embodiment of the present invention.
This photographing system includes: a first terminal 110 and a second terminal 120.
Wherein the first terminal 110 is connected with the second terminal 120.
It should be noted that, besides mobile phones, the first terminal 110 and the second terminal 120 may also be other electronic devices capable of taking pictures, such as personal digital assistants and tablet computers. In addition, the second terminal 120 in the photographing system may be one terminal device or a plurality of terminal devices, which is not limited in this embodiment.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A photographing method, applied to a first terminal, characterized by comprising the following steps:
after connection with a second terminal is established, second video stream data collected by a camera of the second terminal in real time are obtained, wherein the second terminal is any one or more terminals except the first terminal;
acquiring first video stream data acquired by a camera of the first terminal;
receiving a target image acquisition instruction, wherein the acquisition instruction comprises the type and the range of a target image;
the type is used for indicating to acquire the person image in the second video stream data and to acquire the person image and the background image in the first video stream data; or for indicating to acquire the person image and the background image in the second video stream data and to acquire the person image in the first video stream data;
the range is used for indicating the area range corresponding to the target image;
automatically matching the acquisition modes corresponding to the second video stream data and the first video stream data according to the type of the target image in the acquisition instruction;
according to the matched acquisition mode, acquiring a target image corresponding to the type and the range of the target image from the second video stream data and the first video stream data respectively;
the image boundaries or the definitions of the target images acquired correspondingly in different acquisition modes are different;
synthesizing and displaying the acquired target image on a display screen of the first terminal;
and after receiving a photographing instruction, storing the image currently displayed on the display screen of the first terminal.
2. The method of claim 1, wherein prior to establishing the connection with the second terminal, further comprising:
receiving and responding to the real-time photo-combination request sent by each second terminal;
or,
sending a real-time photo-combination request to each second terminal;
and receiving the response message returned by each second terminal.
3. The method according to claim 1, wherein the acquiring the target image corresponding to the type and the range of the target image from the second video stream data and the first video stream data, respectively, comprises:
acquiring a target image corresponding to the type and the range of the target image from each frame picture in the second video stream data and the first video stream data, respectively;
or,
acquiring, at preset frame intervals, a target image corresponding to the type and the range of the target image from partial frame pictures in the second video stream data and the first video stream data, respectively.
4. The method of any of claims 1-3, wherein the target image comprises a first image and a second image, wherein the first image comprises a background image;
the synthesizing and displaying of the acquired target image comprises:
and synthesizing and displaying the second image on the first image.
5. The method according to any one of claims 1 to 3, wherein after the compositely displaying the acquired target image on the display screen of the first terminal, further comprising:
obtaining a retouching instruction input by a first user using the first terminal, wherein the retouching instruction comprises an identifier of a third image and a retouching mode;
and performing retouching processing on the third image according to the retouching instruction.
6. The method of claim 5, wherein said obtaining the retouching instruction input by the first user comprises:
recognizing a first voice input by the first user, and determining the retouching instruction;
or,
and determining the retouching instruction according to the retouching identifier selected by the first user.
7. The method of any of claims 1-3, wherein receiving a photograph instruction comprises:
generating the photographing instruction according to touch operation of a first user using the first terminal;
or,
generating the photographing instruction when it is determined that a target image in the image currently displayed on the display screen of the first terminal meets a first preset photographing condition;
or,
and generating the photographing instruction when the definition of the image currently displayed on the display screen of the first terminal is determined to meet a second preset photographing condition.
8. A first terminal, characterized in that the terminal comprises:
a processor;
a memory for storing executable instructions of the processor;
the touch display screen is electrically connected with the processor;
wherein the processor is configured to call the program code in the memory to implement the photographing method according to any one of claims 1 to 7.
9. A photographing system comprising a first terminal as claimed in claim 8 and a second terminal as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the photographing method according to any one of claims 1-7.
CN201710385944.9A 2017-05-26 2017-05-26 Photographing method, terminal and system Active CN108933891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710385944.9A CN108933891B (en) 2017-05-26 2017-05-26 Photographing method, terminal and system

Publications (2)

Publication Number Publication Date
CN108933891A CN108933891A (en) 2018-12-04
CN108933891B (en) 2021-08-10

Family

ID=64451227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710385944.9A Active CN108933891B (en) 2017-05-26 2017-05-26 Photographing method, terminal and system

Country Status (1)

Country Link
CN (1) CN108933891B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110798621A (en) * 2019-11-29 2020-02-14 维沃移动通信有限公司 Image processing method and electronic equipment
CN111050072B (en) * 2019-12-24 2022-02-01 Oppo广东移动通信有限公司 Method, equipment and storage medium for remote co-shooting
CN113489918B (en) * 2020-10-28 2024-06-21 海信集团控股股份有限公司 Terminal equipment, server and virtual photo combining method
CN113068053A (en) * 2021-03-15 2021-07-02 北京字跳网络技术有限公司 Interaction method, device, equipment and storage medium in live broadcast room
CN114979564A (en) * 2022-04-13 2022-08-30 广州博冠信息科技有限公司 Video shooting method, electronic equipment, device, system and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101431613A (en) * 2007-11-09 2009-05-13 陆云昆 Network group photo implementing method
CN102821253A (en) * 2012-07-18 2012-12-12 上海量明科技发展有限公司 Method and system for realizing group photo function through instant messaging tool
CN105872438A (en) * 2015-12-15 2016-08-17 乐视致新电子科技(天津)有限公司 Video call method and device, and terminal
WO2016159164A1 (en) * 2015-03-31 2016-10-06 大和ハウス工業株式会社 Image display system and image display method
CN106412458A (en) * 2015-07-31 2017-02-15 中兴通讯股份有限公司 Image processing method and apparatus
CN106657791A (en) * 2017-01-03 2017-05-10 广东欧珀移动通信有限公司 Method and device for generating synthetic image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103716537B (en) * 2013-12-18 2017-03-15 宇龙计算机通信科技(深圳)有限公司 Picture synthesis method and terminal




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant