CN113596323A - Intelligent group photo method, device, mobile terminal and computer program product - Google Patents


Info

Publication number
CN113596323A
Authority
CN
China
Prior art keywords
image
portrait
shooting
target
mobile terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110792916.5A
Other languages
Chinese (zh)
Inventor
张健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202110792916.5A priority Critical patent/CN113596323A/en
Publication of CN113596323A publication Critical patent/CN113596323A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an intelligent group photo method, an intelligent group photo device, a mobile terminal and a computer program product, and relates to the technical field of image processing. The intelligent group photo method comprises the following steps: when it is detected that a camera of the mobile terminal is opened, generating a shooting preview interface; in response to a first operation instruction of a user on the mobile terminal, superimposing a portrait template on a corresponding area of the shooting preview interface, the portrait template comprising a designated portrait area and a reserved portrait area; when the shooting pose of a first shooting object in the shooting preview interface matches the designated portrait area, acquiring a shot first image; acquiring a second image, in which the shooting pose of a second shooting object matches the reserved portrait area; and fusing a target image in the second image into the first image according to the portrait template to generate a co-shooting image. The method and the device enable the fused target image to blend naturally with the images of the other persons and with the scene in the first image.

Description

Intelligent group photo method, device, mobile terminal and computer program product
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an intelligent group photo method, an intelligent group photo device, a mobile terminal, and a computer program product.
Background
In the related art, when no one else is available to shoot a group photo, or when the intended members cannot gather in one place, a photo of the on-site members is taken first, the absent members are photographed separately, their figures are manually cut out (matted) from those photos, and the cutouts are spliced into the first photo to obtain a same-frame group photo.
However, when an ordinary user uses image processing software for such matting and compositing, the resulting same-frame photo is often poor because of the user's limited professional skill.
Disclosure of Invention
The invention mainly aims to provide an intelligent group photo method, an intelligent group photo device, a mobile terminal and a computer program product, so as to solve the technical problem in the prior art that a same-frame group photo obtained by manual compositing has a poor effect.
In order to achieve the above object, the present invention provides an intelligent group photo method, which comprises the following steps:
when detecting that a camera of the mobile terminal is opened, generating a shooting preview interface of the mobile terminal;
responding to a first operation instruction of a user on the mobile terminal, and overlapping and displaying a portrait template in a corresponding area of a shooting preview interface; the portrait template comprises a designated portrait area and a reserved portrait area;
when the photographing posture of a first photographing object in a photographing preview interface is matched with a designated portrait area, acquiring a first image photographed by a mobile terminal;
responding to a second operation instruction of the user on the mobile terminal, and acquiring a second image; the photographing posture of a second photographing object in the second image is matched with the reserved portrait area;
fusing a target image in the second image into the first image according to the portrait template to generate a co-shooting image; wherein the target image is a person image of a second photographic subject extracted from the second image.
Optionally, the step of fusing the target image in the second image into the first image according to the portrait template to generate a co-shooting image includes:
according to the portrait template, a target image in the second image is attached to a region corresponding to the reserved portrait region in the first image to obtain an attached image;
and carrying out image harmony processing on the attached image to generate a co-shooting image.
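As one possible sketch of the attach-and-harmonize steps above (the patent does not specify a particular harmonization algorithm), the example below pastes the matted person pixels into the first image and then shifts their color statistics toward those of the composited image as a whole, a simple stand-in for "image harmony processing". All function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def attach_and_harmonize(first_img, target, mask, top_left):
    """Paste `target` (the matted second subject) into `first_img` at `top_left`
    using `mask` (1 inside the person, 0 outside), then roughly harmonize the
    pasted pixels by matching their mean/std to the overall image statistics.
    A minimal, hypothetical stand-in for the patent's 'image harmony processing'.
    """
    out = first_img.astype(np.float64).copy()
    y, x = top_left
    h, w = target.shape[:2]
    region = out[y:y+h, x:x+w]          # view into `out`
    m = mask.astype(bool)
    # attach: composite the person pixels over the first image
    region[m] = target[m]
    # harmonize: shift pasted pixels toward the composited image's color stats
    bg_mean = out.mean(axis=(0, 1))
    bg_std = out.std(axis=(0, 1)) + 1e-6
    fg = region[m]
    fg_mean, fg_std = fg.mean(axis=0), fg.std(axis=0) + 1e-6
    region[m] = (fg - fg_mean) / fg_std * bg_std + bg_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

In practice a dedicated method such as Poisson blending could replace the statistics-matching step; this sketch only illustrates the two-stage attach-then-harmonize structure of the claim.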
Optionally, the step of attaching the target image in the second image to the region corresponding to the reserved portrait region in the first image according to the portrait template to obtain an attached image includes:
determining the target center coordinate of the target image in the reserved portrait area according to the center coordinate of the comparison image of the first shooting object in the first image;
determining a target height value of the target image according to the height value of the comparison image and the height ratio of the first shooting object to the second shooting object;
and attaching the target image in the first image according to the target center coordinate and the target height value to obtain the attached image.
Optionally, the center coordinates include a comparison center horizontal-axis coordinate and a comparison center vertical-axis coordinate;
determining the target center coordinate of the target image in the reserved portrait area according to the center coordinate of the comparison image of the first shooting object in the first image comprises the following steps:
obtaining the target center horizontal-axis coordinate of the target image in the first image according to the comparison center horizontal-axis coordinate, the horizontal-axis distance between the center points of the designated portrait area and the reserved portrait area, the height value of the designated portrait area, the height value of the comparison image and a first preset formula; wherein the first preset formula is as follows:
x1 = x2 + w × (H / h)
where x1 is the target center horizontal-axis coordinate, x2 is the comparison center horizontal-axis coordinate, w is the horizontal-axis distance between the center points of the designated portrait area and the reserved portrait area, H is the height value of the comparison image, and h is the height value of the designated portrait area;
the fitting the target object in a first image according to the target center coordinate and the target height value to obtain the fitted image includes:
obtaining the target central longitudinal axis coordinate of the target image in the first image according to the comparison central longitudinal axis coordinate, the height value of the comparison image, the height value of the first shooting object, the height value of the second shooting object and a second preset formula; wherein the second predetermined formula is:
y1=y2-0.5×H×(H2-H1),
y1is the target central longitudinal axis coordinate, y2For comparing the central ordinate axis coordinates, H is the height value of the comparison image, H2Is a height value of the first subject, H1The height value of the second shooting object is obtained;
obtaining the target height value of the target image in the first image according to the height value of the comparison image, the height value of the first shooting object, the height value of the second shooting object and a third preset formula; wherein the third preset formula is:
H′ = H × (H2 / H1)
where H′ is the target height value.
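The three preset formulas above can be sketched as a small placement helper in Python. All names are illustrative; in particular, the first formula is reconstructed here as x1 = x2 + w × (H / h) — i.e., the template's center-point distance w scaled by the ratio of the comparison-image height to the designated-area height — on the assumption that this matches the variables listed in the text, since the figure carrying that formula is not legible in this copy.

```python
def place_target(x2, y2, H, h, w, H1, H2):
    """Compute where the matted person image (target image) goes in the first image.

    x2, y2 : center coordinates of the comparison image (the first subject)
    H      : height of the comparison image, in pixels
    h      : height of the designated portrait area in the template
    w      : horizontal distance between the center points of the designated
             and reserved portrait areas in the template
    H1, H2 : height values of the second and first shooting objects
    """
    x1 = x2 + w * H / h             # first preset formula (reconstructed)
    y1 = y2 - 0.5 * H * (H2 - H1)   # second preset formula
    H_prime = H * (H2 / H1)         # third preset formula: target height value
    return x1, y1, H_prime
```

For example, with equal subject heights the target lands at the same vertical center as the comparison image and keeps its height, offset horizontally by the scaled template distance.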
Optionally, in response to a first operation instruction of the user on the mobile terminal, before the step of superimposing the portrait template on the corresponding area of the shooting preview interface, the method further includes:
identifying and processing a scene to be photographed in a shooting preview interface to obtain scene type information;
and screening the portrait template from a preset portrait template library according to the scene type information and the preset figure body type information.
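A minimal sketch of the template-screening step above, assuming a template library of records tagged with scene types and body types (the record layout and key names are assumptions; the patent does not specify the library's structure):

```python
def filter_templates(library, scene_type, body_types):
    """Pick portrait templates whose tags cover the recognized scene type and
    the preset person body-type information. `library` is a list of dicts with
    'scenes' and 'body_types' tag sets — a hypothetical layout for illustration.
    """
    return [t for t in library
            if scene_type in t["scenes"]
            and all(b in t["body_types"] for b in body_types)]
```

A scene classifier would supply `scene_type`, and the preset person body-type information would supply `body_types`.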
Optionally, before the step of acquiring the first image captured by the mobile terminal when the shooting pose of the first shooting object in the shooting preview interface matches the designated portrait area, the method further includes:
and when the photographing gesture of the first photographing object in the photographing preview interface does not match the designated portrait area, responding to a third operation instruction of the user on the mobile terminal, and changing the display parameters of the portrait template in the photographing preview interface.
Optionally, the step of acquiring the second image in response to a second operation instruction of the user on the mobile terminal includes:
and responding to a shooting instruction of the user on the mobile terminal, and acquiring a second image shot by the mobile terminal when the shooting posture of the second shooting object in the shooting preview interface is matched with the reserved portrait area.
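The patent does not state how a shooting pose is judged to "match" a portrait area; one common, hypothetical criterion is a thresholded intersection-over-union (IoU) between the detected person's bounding box and the template's portrait region:

```python
def region_match(person_box, template_box, threshold=0.8):
    """Return True when the detected person's bounding box overlaps the
    template's portrait area enough (IoU >= threshold) to trigger capture.
    Boxes are (x1, y1, x2, y2); the IoU criterion itself is an assumption,
    not something the patent specifies.
    """
    ax1, ay1, ax2, ay2 = person_box
    bx1, by1, bx2, by2 = template_box
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return union > 0 and inter / union >= threshold
```

A mask-level IoU (person segmentation against the blank portrait area) would give a stricter pose match than bounding boxes; the gating logic is the same.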
In a second aspect, an embodiment of the present invention further provides an intelligent group photo apparatus, including:
the interface display module is used for generating a shooting preview interface of the mobile terminal when detecting that a camera of the mobile terminal is started;
the superposition display module is used for responding to a first operation instruction of a user on the mobile terminal and superposing and displaying a portrait template in a corresponding area of the shooting preview interface; the portrait template comprises a designated portrait area and a reserved portrait area;
the image shooting module is used for acquiring a first image shot by the mobile terminal when the shooting posture of the first shooting object in the shooting preview interface is matched with the designated portrait area;
the image acquisition module is used for responding to a second operation instruction of the user on the mobile terminal and acquiring a second image; the photographing posture of a second photographing object in the second image is matched with the reserved portrait area;
the image fusion module is used for fusing a target image in the second image into the first image according to the portrait template so as to generate a co-shooting image; wherein the target image is a person image of a second photographic subject extracted from the second image.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program is configured to implement the steps of the foregoing method.
In a fourth aspect, the present invention also provides a computer program product including executable program code, where the program code, when executed by a processor, performs the method as described above.
According to the intelligent group photo method provided by the embodiment of the invention, a portrait template showing a designated portrait area and a reserved portrait area is superimposed on the shooting preview interface, which fixes the standing-position relationship and the size relationship between the two areas. The on-site person is then photographed according to the designated portrait area to obtain the first image, and the off-site person is photographed according to the reserved portrait area. The target image of the off-site person is matted out and fused into the first image according to the portrait template. This reduces the demand on the user's image-processing skills, and the fused target image blends naturally with the standing position and size of the other on-site persons' images in the first image, as well as with the scene in the first image.
Drawings
Fig. 1 is a schematic structural diagram of a mobile terminal suitable for implementing the intelligent group photo method according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of a first embodiment of the intelligent group photo method of the present invention;
FIG. 3 is a diagram of a shooting preview interface according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating target image attachment in accordance with an embodiment of the present invention;
FIG. 5 is a schematic detailed flowchart of a second embodiment of the intelligent group photo method of the present invention;
fig. 6 is a detailed flowchart of the intelligent group photo method according to the third embodiment of the present invention.
Fig. 7 is a functional block diagram of an intelligent group photo device according to a first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the related art, when two people (a couple, friends, a parent and child, etc.) travel together and want a group photo, they usually have to ask a third party for help; but sometimes no suitable third person is around, or they are too embarrassed to ask. Moreover, because the passer-by who takes the photo is random, the shooting quality cannot be guaranteed, the result may not be what the subjects want, and it is inconvenient to ask the helper to retake an unsatisfactory photo. Alternatively, when people cannot gather in one place, a photo of the on-site members is taken first, the absent members are manually matted out of their own photos, and the cutouts are spliced in to obtain a same-frame group photo. However, limited by software proficiency and aesthetic ability, a two-person photo composited by an ordinary user with image processing software easily looks fake, and the same-frame effect is poor.
The invention provides an intelligent group photo method. A portrait template showing a designated portrait area and a reserved portrait area is superimposed on the shooting preview interface, which fixes the standing-position relationship and the size relationship between the two areas. The on-site person is photographed according to the designated portrait area to obtain a first image, the off-site person is photographed according to the reserved portrait area, and the off-site person's target image is matted out and fused into the first image according to the portrait template. This reduces the demand on the user's image-processing skills; the fused target image blends naturally with the standing position and size of the other on-site persons' images in the first image, as well as with the scene in the first image.
The inventive concept of the present application is further illustrated below with reference to some specific embodiments.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a mobile terminal suitable for implementing the intelligent group photo method according to an embodiment of the present invention.
The mobile terminal can install various applications and display the objects provided in them, and may be a mobile phone, a tablet computer, a wearable device, a notebook computer, or another electronic device capable of implementing these functions.
The mobile terminal comprises at least one processor 301, a memory 302 and an intelligent group photo method program stored on the memory and executable on the processor, the intelligent group photo method program being configured to implement the steps of the intelligent group photo method as follows.
The processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement the intelligent group photo method provided by method embodiments herein.
The mobile terminal further comprises a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Each peripheral device may be connected to the communication interface 303 via a bus, signal line, or circuit board. Specifically, the peripheral devices include: a camera 307, a radio frequency circuit 304, a display screen 305, and a power supply 306.
The communication interface 303 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 301 and the memory 302. The communication interface 303 is also used for receiving the second image, transmitted by a server over the network or by another mobile terminal via Bluetooth or other means. In some embodiments, the processor 301, the memory 302 and the communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other mobile terminals via electromagnetic signals. The rf circuit 304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, it also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 301 as a control signal for processing. In this case, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 305, set on the front panel of the electronic device; in other embodiments, there may be at least two display screens 305, respectively disposed on different surfaces of the electronic device or in a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved or folded surface of the electronic device. The display screen 305 may even be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The display screen 305 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The power supply 306 is used to power the various components in the mobile terminal. The power source 306 may be alternating current, direct current, disposable or rechargeable. When the power source 306 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
The camera 307 is used to take pictures and to display the currently previewed scene in a camera preview window of the display screen 305. When the user clicks the camera preview window or inputs a corresponding operation instruction through an external device such as a wireless keyboard, the camera preview window displays the corresponding icon or a corresponding picture from the gallery.
In some application scenarios, a user may interact with the mobile terminal through any suitable type of one or more interaction means, such as a touch screen, a keyboard, a remote controller, or the like, based on which the user may operate the mobile terminal to achieve a user intention, such as filtering, or moving an icon or zooming in and out, or the like.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the mobile terminal and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
In addition, for convenience of understanding, terms referred to in the embodiments of the present invention are explained below:
Augmented Reality (AR) is a technology that calculates the position and angle of the camera image in real time and adds corresponding images, videos and 3D models, with the aim of overlaying the virtual world on the real world on a screen and interacting with it. AR display technology is widely applied in terminal devices; for example, a terminal device running iOS may implement AR display using ARKit, and a terminal device running Android may implement AR display using ARCore.
It is to be understood that any number of elements in the figures are provided by way of example and not limitation, and any nomenclature is used for distinction only and not limitation.
The invention provides a first embodiment of an intelligent group photo method. Referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the intelligent group photo method of the present invention.
The embodiment comprises the following steps:
and step S101, when the camera of the mobile terminal is detected to be turned on, generating a shooting preview interface of the mobile terminal.
The embodiments of the present application are described by taking a smartphone as an example of the mobile terminal. It should be noted that scenarios in which the mobile terminal is a tablet computer or another type of electronic device can readily be conceived by those skilled in the art.
The user can open the camera of the mobile terminal through a photographing application installed on it, or through the photographing function built into the WeChat application. After the camera is opened, the scene the camera can capture is displayed in real time on the display interface of the mobile terminal through the shooting preview interface. The shooting preview interface may include a scene display area that displays the picture of the scene to be shot currently acquired by the camera. Therefore, after the user opens the camera and aims it at the desired scene, the picture of that scene is displayed on the shooting preview interface.
It will be appreciated that, referring to fig. 3, the shooting preview interface may further include other display areas for displaying corresponding operation controls, such as a portrait template library control or a shooting icon, so that the mobile terminal shoots a picture of the scene in response to the user touching the shooting control and displays it on the mobile terminal. The user can view the picture to check the imaging effect and judge whether the lighting conditions of the scene meet the requirements. After the picture is shot, it is also convenient for the user to edit it, such as adding a portrait template or dragging the portrait template.
And S102, responding to a first operation instruction of a user on the mobile terminal, and displaying a portrait template in a superposition mode in a corresponding area of a shooting preview interface. The portrait template comprises a designated portrait area and a reserved portrait area.
The portrait template may be displayed on the shooting preview interface in an overlaid manner through an augmented reality technology, and those skilled in the art know how to implement the augmented reality technology, and details thereof are not described here.
The user can open a portrait template library control on the shooting preview interface by a click operation to select a portrait template from the portrait template library. The portrait template may be a character pose mask comprising three parts: a mask main body, a designated portrait area and a reserved portrait area. In a given portrait template, the size relationship and the position relationship between the designated portrait area and the reserved portrait area are fixed, so that when the pictures are fused, the person images from the two shots blend naturally into the same scene, and their positions and sizes are more consistent with the actual situation of the scene to be shot.
Referring to fig. 3, the character pose mask is a rectangular shaded region from which a left and a right portrait area are left blank. Either of the two blank portrait areas can serve as the designated portrait area, and the other as the reserved portrait area. Within the rectangular shaded region, the relative positions and sizes of the two blank portrait areas are fixed.
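The character pose mask described above can be sketched as a binary array: shaded everywhere except the two blank portrait areas. Box coordinates and names here are illustrative only:

```python
import numpy as np

def make_pose_mask(h, w, left_box, right_box):
    """Build the rectangular 'character pose mask': 1 (shaded) everywhere
    except inside the two portrait areas, which are left blank (0).
    Boxes are (y0, y1, x0, x1); which box is 'designated' and which is
    'reserved' is chosen by the user, per the description.
    """
    mask = np.ones((h, w), dtype=np.uint8)
    for y0, y1, x0, x1 in (left_box, right_box):
        mask[y0:y1, x0:x1] = 0
    return mask
```

Because the two boxes are fixed relative to each other inside the mask, scaling or dragging the whole mask on the preview interface preserves their mutual position and size relationship.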
It is understood that the designated portrait area and the reserved portrait area may each include at least one portrait. For example, for a two-person group photo, the designated portrait area and the reserved portrait area each comprise one portrait; for a three-person group photo with two people on site, the designated portrait area comprises two portraits and the reserved portrait area comprises one.
For example, in one embodiment, step S102 includes:
Step A10, in response to a first operation instruction of the user on the mobile terminal, screening out the portrait template from a plurality of candidate portrait templates.
For example, the user can open the portrait template library control on the shooting preview interface by a click operation to display the candidate portrait templates on the shooting preview interface. The user can browse the candidate portrait templates in sequence by sliding on the mobile terminal and, upon seeing the desired one, select it as the portrait template by clicking its display area.
And step A20, in response to a sliding operation instruction of the user on the mobile terminal, displaying a portrait template in an overlapping manner in a corresponding area of the shooting preview interface.
After the character pose template is determined, the user can superimpose it at the corresponding position on the shooting preview interface through a dragging operation on the mobile terminal. It can be understood that the specific position and size of the character pose template can be adjusted by the user to fit the objects to be photographed in the scene to be shot; that is, the size relationship and position relationship of the designated portrait area and the reserved portrait area on the shooting preview interface both match the scene to be shot.
And step S103, acquiring a first image shot by the mobile terminal when the shooting posture of the first shooting object in the shooting preview interface is matched with the designated portrait area.
Referring to fig. 4, after the portrait template is displayed in a superimposed manner on the shooting preview interface, the user may instruct the first photographic subject to stand in the scene to be shot and strike a corresponding pose according to the designated portrait area. The first photographic subject may also look at the designated portrait area in advance to learn the pose to adopt. The user adjusts the standing position and pose of the first photographic subject via the designated portrait area in the shooting preview interface until the image of the first photographic subject matches the designated character template. For example, when the image of the first photographic subject appears in the left blank portrait of the character pose template and the edge of the image fits the edge of that blank portrait, the user can touch the shooting control on the display screen to obtain the first image shot by the mobile terminal. The mobile terminal may save the first image locally.
It is understood that the first photographic subject includes at least one person, which may be imaged in the first image in its entirety in a single shot. The second photographic subject may also include at least one person, which may be imaged in the second image in a single shot.
And step S104, responding to a second operation instruction of the user on the mobile terminal, and acquiring a second image.
And the photographing posture of the second photographing object in the second image is matched with the reserved portrait area.
The second image may be acquired in two ways:
(1) The first mode: when the second photographic subject is on site, the second image can be shot on site with the mobile terminal. That is, the second photographic subject operates the mobile terminal to shoot the first photographic subject and obtain the first image. After the first image is obtained, the second photographic subject hands the mobile terminal to the first photographic subject, who operates it to shoot the second photographic subject and obtain the second image. In this process, the first photographic subject adjusts the standing position and pose of the second photographic subject via the reserved portrait area of the shooting preview interface until the image of the second photographic subject matches the reserved character template in the shooting preview interface.
In this way, the second operation instruction may be to click the "shooting" control.
In this case, step S104 is adaptively changed to S104': and responding to a shooting instruction of the user on the mobile terminal, and acquiring a second image shot by the mobile terminal when the shooting posture of the second shooting object in the shooting preview interface is matched with the reserved portrait area.
(2) The second mode: the user can send the portrait template, or a preview picture in which the portrait template or the reserved portrait area is framed, to the second photographic subject through an image-sending control of an application such as WeChat on the mobile terminal. Alternatively, a sharing control may be preset on the shooting preview interface: after the user clicks the sharing control, a contact list is displayed on the shooting preview interface, and after the user selects and clicks a contact in the list, the portrait template or the framed preview picture is sent to the second photographic subject. In this mode, the second operation instruction may be clicking the "share" control.
After receiving the portrait template, the second photographic subject shoots a personal picture, i.e. the second image, according to the reserved portrait area, and sends the second image back to the mobile terminal through the network. The user receives the second image over the network.
It can be understood that when the second photographic subject shoots the second image, only the pose needs to match the reserved portrait area; the size of the second photographic subject can be adjusted when the images are fused.
And S105, fusing a target image in the second image into the first image according to the portrait template to generate a co-shooting image. Wherein the target image is a person image of a second photographic subject extracted from the second image.
After the user receives the second image through the mobile terminal, the portrait area in the second image can be extracted through matting processing to obtain a target image, or the foreground and background can be separated through background segmentation processing, such as an image matting algorithm, to obtain the target image. The target image is then fused into the area of the first image corresponding to the reserved portrait area, and its size is adjusted according to the reserved portrait area, thereby generating the co-shooting image.
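The patent leaves the matting/segmentation algorithm open. As a minimal, assumption-laden stand-in, a naive background-difference matte can illustrate the extraction step; a real system would use a trained matting or background-segmentation model, and the known-background assumption here is purely for illustration:

```python
import numpy as np

def extract_target(image, bg_color, threshold=60.0):
    """Naive background-difference matte: pixels whose colour is far from the
    (assumed known) background colour are treated as the person, i.e. the
    target image. Returns an RGBA cut-out and the boolean foreground mask."""
    diff = np.linalg.norm(
        image.astype(np.float64) - np.asarray(bg_color, dtype=np.float64),
        axis=-1,
    )
    mask = diff > threshold                             # True where the subject is
    alpha = (mask * 255).astype(np.uint8)
    rgba = np.dstack([image, alpha])                    # transparent background
    return rgba, mask

# Tiny synthetic "second image": grey background with a red subject column.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
img[:, 2] = (200, 30, 30)
rgba, mask = extract_target(img, bg_color=(128, 128, 128))
print(mask.sum())   # → 4
```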
Specifically, step S105 may include:
and step B10, pasting the target image in the second image to the region corresponding to the reserved portrait region in the first image to obtain a pasted image.
In this step, after the target image is matted out, the portrait template is known and fixed — that is, the relative position between the reserved portrait area and the designated portrait area is determined — so the target image can be directly attached to the area of the first image corresponding to the reserved portrait area.
In a specific scenario, if the distances from the first photographic subject and the second photographic subject to the camera are approximately equal, step B10 may include:
and S101, determining the target center coordinate of the target image in the reserved portrait area according to the center coordinate of the contrast image of the first shooting object in the first image.
The comparison image corresponds to the designated portrait area and the target image corresponds to the reserved portrait area, so that the target center coordinate of the target image can be determined according to the center coordinate of the comparison image in the first image.
Referring to fig. 4, a plane coordinate system may be established in the first image according to its height direction and horizontal direction. The first image includes an imaging area of the first photographic subject and a background area; the coordinate of the center point of the imaging area in the plane coordinate system is extracted as (x2, y2), where x2 is the comparison-center horizontal-axis coordinate and y2 is the comparison-center vertical-axis coordinate.
When the target image is attached to the first image, the coordinates of its center point in the plane coordinate system are (x1, y1), where x1 is the target-center horizontal-axis coordinate and y1 is the target-center vertical-axis coordinate. The size of the target image also needs to be scaled, since the second image is taken of a different scene than the first image. Considering that the sizes and spacing of the designated portrait area and the reserved portrait area in the portrait template are fixed and known, (x1, y1) and the target height value of the target image in the first image can be calculated from the position relationship and size relationship between the designated portrait area and the reserved portrait area, combined with the actual heights of the first and second photographic subjects.
The target center coordinates may be calculated as detailed in the following steps:
(1) Obtaining the target-center horizontal-axis coordinate of the target image in the first image according to the comparison-center horizontal-axis coordinate, the center-point horizontal-axis distance value between the designated portrait area and the reserved portrait area, the height value of the designated portrait area, the height value of the comparison image, and a first preset formula; wherein the first preset formula is:

x1 = x2 + w × (H / h),

where x1 is the target-center horizontal-axis coordinate, x2 is the comparison-center horizontal-axis coordinate, w is the center-point horizontal-axis distance value between the designated portrait area and the reserved portrait area, H is the height value of the comparison image, and h is the height value of the designated portrait area.
(2) Obtaining the target-center vertical-axis coordinate of the target image in the first image according to the comparison-center vertical-axis coordinate, the height value of the comparison image, the height value of the first photographic subject, the height value of the second photographic subject, and a second preset formula; wherein the second preset formula is:

y1 = y2 − 0.5 × H × (H2 − H1),

where y1 is the target-center vertical-axis coordinate, y2 is the comparison-center vertical-axis coordinate, H is the height value of the comparison image, H2 is the height value of the first photographic subject, and H1 is the height value of the second photographic subject.
Step B102, determining the target height value of the target image according to the height value of the comparison image and the height ratio of the first photographic subject to the second photographic subject.
Because the distances from the first photographic subject and the second photographic subject to the camera are approximately equal, their imaging scales are approximately equal. Therefore, the size of the target image can be calculated from the imaging scale of the first image, and the target image scaled accordingly.
For example, the target image may be scaled according to the imaging scale in the height direction of the image.
Specifically, the target height value of the target image in the first image is obtained according to the height value of the comparison image, the height value of the first shooting object, the height value of the second shooting object and a third preset formula.
Wherein, the third preset formula is:
H′=H×(H2/H1);
h' is a target height value.
In this step, because the distances from the first and second photographic subjects to the camera are approximately equal, the ratio of their actual heights is consistent with the ratio of the sizes of the target image and the comparison image in the first image; therefore, the target height value can be calculated.
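The three preset formulas can be sketched together as follows. Note that the first formula did not survive extraction in this text; `x1 = x2 + w * (H / h)` is one plausible reading consistent with the listed variables (scaling the template spacing w by the ratio of the comparison-image height H to the designated-area height h), while the second and third formulas are used as stated. All numeric values are hypothetical:

```python
def target_placement(x2, y2, H, h, w, H1, H2):
    """Compute where, and at what size, the target image (second subject)
    is attached in the first image.

    x2, y2 : center of the comparison image (first subject) in the first image
    H      : height of the comparison image in the first image (pixels)
    h      : height of the designated portrait area in the template
    w      : horizontal distance between the two template areas' center points
    H1, H2 : real heights of the second and first subjects, respectively
    """
    x1 = x2 + w * (H / h)           # first preset formula (reconstructed reading)
    y1 = y2 - 0.5 * H * (H2 - H1)   # second preset formula, as given
    H_target = H * (H2 / H1)        # third preset formula, as given
    return x1, y1, H_target

# Template areas 480 px apart; subjects 1.8 m (first) and 1.6 m (second) tall.
x1, y1, Ht = target_placement(x2=300, y2=400, H=500, h=500, w=480, H1=1.6, H2=1.8)
print(x1, y1, Ht)
```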
Step B103, attaching the target image into the first image according to the target-center horizontal-axis coordinate, the target-center vertical-axis coordinate and the target height value, to obtain the attached image.
In this embodiment, based on the above steps, the target image may be attached to the first image on the basis of ensuring that the position of the portrait attachment is correct and the size of the portrait attachment conforms to the actual situation of the scene to be photographed.
And step B20, carrying out image harmony processing on the bonding image to generate a co-shooting image.
This addresses problems such as mismatched brightness and color tone in the directly attached image. Taking the shooting conditions (such as time, season, light, and weather) as a domain, this embodiment may use a GAN (Generative Adversarial Network) to harmonize the image and pull the domains of the harmonized foreground and background closer together. Specifically, an image fusion model obtained by generative adversarial network training performs image fusion on the attached image, eliminating the split feeling between the outline of the target image and the background image, so that the target image is in a naturally fused state.
Furthermore, in some embodiments, after step B20, the method further includes:

Step B30, processing the domains of the target image and the background in the fused image through a preset domain verification discriminator.

The preset domain verification discriminator makes the domains of the foreground and the background in the generated image as close as possible, so as to further improve the image fusion effect and make the co-shooting image as realistic as possible.
In this embodiment, through the above steps, the objects to be photographed that need to be in the group photo each shoot their own image according to the corresponding portrait template. After at least two pictures are obtained, they are fused into a picture of the scene to be shot through the position and size relationship between the portrait template areas. This reduces the requirement on the user's image processing ability and significantly improves the fusion effect of the co-shooting image: after fusion, the position, size and other display effects of the matted target image relative to the images of the other people in the first image are more natural, as is the fusion with the scene in the first image.
On the basis of the first embodiment of the intelligent group photo method, a second embodiment of the intelligent group photo method is provided. Referring to fig. 5, fig. 5 is a schematic flow chart of the second embodiment of the intelligent group photo method of the present invention.
In this embodiment, the method includes the steps of:
step S201, when the camera of the mobile terminal is detected to be opened, a shooting preview interface of the mobile terminal is generated.
Step S202, identifying the scene to be photographed in the shooting preview interface so as to obtain the scene type information.
This step may employ a scene recognition model obtained by training a ResNet deep neural network. Scene recognition yields the scene type of the scene to be shot in the current shooting preview interface — for example, hundreds of scene types such as seaside, mountain top, grassland, square, stage, cloudy, and indoor. The vector output by the scene recognition model is converted through a DNN (Deep Neural Network) into a 6-dimensional vector, denoted Y = (Y1, Y2, Y3, Y4, Y5, Y6).
And S203, screening the portrait template from a preset portrait template library according to the scene type information and the preset portrait body type information.
The preset figure body type information comprises the sex, the height and the weight of the first shooting object and the second shooting object.
The portrait template may be screened from the preset portrait template library through a pre-trained portrait template screening model. The screening model receives scene type information and preset portrait body type information as input vectors, and outputs at least one portrait template — a series of portrait templates suitable for the scene to be shot. For example, if the scene to be shot is a sea of flowers, the output may include a two-person back-to-back pose template, a two-person side-by-side sitting pose template, a two-person heart pose template, and so on.
The user can display the output portrait templates on the shooting preview interface by clicking the portrait template library control, and select a desired portrait template from among them.
For example, when the intelligent group photo is a group photo of two persons, the user inputs the sex, height, weight and other information of the two persons before the group photo. This information serves as the input of the portrait template screening model. Specifically, the input includes:
h (H1, H2) representing a combination of heights of the two persons;
s (S1, S2) represents the sex combination of two persons, and in the training model, the sex combination can comprise any one of men, women and children; and the number of the first and second groups,
w (W1, W2) represents a combination of body weights of two persons.
The portrait template screening model uniformly represents the input H (H1, H2), S (S1, S2) and W (W1, W2) as a 6-dimensional vector X = (H1, H2, S1, S2, W1, W2) = (X1, X2, X3, X4, X5, X6).
In the portrait template screening model, a portrait template is screened from the preset portrait template library through the two input 6-dimensional vectors. For example, the screening model maps the two 6-dimensional inputs to a photographing classification result represented by a two-dimensional vector, and then obtains a series of photographing poses belonging to that classification according to the position of the two-dimensional vector in the classification plane of the preset portrait template library — for example, the portion framed by a screening circle of preset radius r centered on the two-dimensional vector.
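The radius-r screening step can be sketched as follows. The template names, 2-D coordinates and radius are hypothetical; the patent does not specify how the library's classification plane is laid out:

```python
import numpy as np

def screen_templates(query_2d, library, r=0.5):
    """Return the names of templates whose 2-D classification coordinates fall
    inside a screening circle of radius r centered on the model's 2-D output."""
    q = np.asarray(query_2d, dtype=float)
    return [name for name, pos in library.items()
            if np.linalg.norm(np.asarray(pos, dtype=float) - q) <= r]

# Hypothetical library: each template tagged with a point in the classification plane.
library = {
    "back-to-back": (0.2, 0.8),
    "side-by-side sitting": (0.3, 0.7),
    "heart pose": (0.9, 0.1),
}
print(screen_templates((0.25, 0.75), library, r=0.2))
```

The candidate list this produces is what the user then browses on the shooting preview interface.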
It can be understood that, before the group photo, the user may also identify or collect information such as the height and sex of each photographic subject through the camera of the mobile terminal, and receive input of information such as weight.
And S204, responding to a first operation instruction of the user on the mobile terminal, and displaying a portrait template in a superposition manner in a corresponding area of the shooting preview interface. The portrait template comprises a designated portrait area and a reserved portrait area.
And step S205, when the photographing gesture of the first photographing object in the photographing preview interface is matched with the designated portrait area, acquiring a first image photographed by the mobile terminal.
And step S206, responding to a second operation instruction of the user on the mobile terminal, and acquiring a second image.
And step S207, fusing the target image in the second image into the first image according to the portrait template to generate a co-shooting image. Wherein the target image is a person image of a second photographic subject extracted from the second image.
In this embodiment, through scene recognition and portrait body type information, a photographing pose suited to the scene can be recommended to the user, so that the fusion of the target image and the first image is more natural, fits the background better, and produces a better imaging effect.
On the basis of the first embodiment and the second embodiment of the intelligent group photo method of the present invention, a third embodiment of the intelligent group photo method of the present invention is provided. Referring to fig. 6, fig. 6 is a schematic flow chart of the third embodiment of the intelligent group photo method of the present invention.
In this embodiment, the method includes the steps of:
and S301, when the camera of the mobile terminal is detected to be turned on, generating a shooting preview interface of the mobile terminal.
And S302, in response to a first operation instruction of a user on the mobile terminal, overlaying and displaying a portrait template in a corresponding area of a shooting preview interface. The portrait template comprises a designated portrait area and a reserved portrait area.
And step S303, when the photographing gesture of the first photographing object in the photographing preview interface is not matched with the designated portrait area, responding to a third operation instruction of the user on the mobile terminal, and changing the display parameters of the portrait template in the photographing preview interface.
The third operation instruction may be a sliding operation of the user on a touch screen of the mobile terminal, or a zooming operation or an operation of clicking a corresponding control on the shooting preview interface.
The display parameter can be a display area of the portrait template in the shooting preview interface, or a display size ratio of the portrait template in the shooting preview interface, or a display angle of the portrait template in the shooting preview interface.
It can be understood that, due to depth-of-field relationships in the shooting preview interface, the area the user first selects for the portrait template may not fit the actual shooting situation. In that case, this step changes the display parameters of the portrait template in the shooting preview interface in response to the user's third operation instruction on the mobile terminal, until the photographing pose of the first photographic subject in the shooting preview interface matches the designated portrait area, thereby obtaining an ideal first image.
For example, when a user takes pictures at the seaside, in order to get the wide sea into the frame, the object to be photographed is placed far from the camera. The image of the object to be shot then occupies only a partial region of the designated portrait template in the shooting preview interface and cannot be fully matched. The user can zoom the portrait template to a suitable size by clicking the zoom control on the shooting preview interface, and slide it to the ideal position through a sliding operation, until the photographing pose of the first photographic subject matches the designated portrait area; the user then clicks the shooting control on the shooting preview interface to obtain the first image.
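The zoom and drag adjustments of step S303 amount to a scale-and-translate transform on the template's display parameters; a minimal sketch, assuming the template area is given as a hypothetical (center x, center y, width, height) tuple in preview coordinates:

```python
def adjust_template(rect, scale=1.0, dx=0.0, dy=0.0):
    """Apply the user's zoom (scale about the area's center) and drag (dx, dy)
    to a template area given as (cx, cy, w, h) in preview coordinates."""
    cx, cy, w, h = rect
    return (cx + dx, cy + dy, w * scale, h * scale)

# Shrink the template to 60 % and slide it toward a distant subject.
print(adjust_template((540, 360, 200, 500), scale=0.6, dx=-120, dy=40))
```

Because the designated and reserved areas belong to one template, the same transform applies to both, so their fixed relative position and size survive the adjustment.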
And step S304, when the photographing gesture of the first photographing object in the photographing preview interface is matched with the designated portrait area, acquiring a first image photographed by the mobile terminal.
And step S305, responding to a second operation instruction of the user on the mobile terminal, and acquiring a second image.
And S306, fusing the target image in the second image into the first image according to the portrait template to generate a co-shooting image. Wherein the target image is a person image of a second photographic subject extracted from the second image.
In this embodiment, based on the above steps, when the user shoots the first image including the scene to be shot, the display parameters of the portrait template are adjusted through interaction between the user and the mobile terminal, so that the portrait template and the scene to be shot adapt to each other more flexibly and naturally, the first image with an ideal shooting effect is obtained, and an ideal co-shooting image is obtained.
In addition, the invention also provides a first embodiment of the intelligent group photo device. Referring to fig. 7, fig. 7 is a functional module diagram of the first embodiment of the intelligent group photo device.
In this embodiment, the intelligent group photo device includes:
the interface display module 10 is configured to generate a shooting preview interface of the mobile terminal when detecting that a camera of the mobile terminal is turned on;
the superposition display module 20 is used for responding to a first operation instruction of a user on the mobile terminal, and superposing and displaying a specified portrait template in a corresponding area of the shooting preview interface; the portrait template comprises a designated portrait area and a reserved portrait area;
the image shooting module 30 is configured to obtain a first image shot by the mobile terminal when the shooting pose of the first shooting object in the shooting preview interface matches the designated portrait area;
the image acquisition module 40 is used for responding to a second operation instruction of the user on the mobile terminal and acquiring a second image; the photographing posture of a second photographing object in the second image is matched with the reserved portrait area;
the image fusion module 50 is used for fusing the target image in the second image into the first image according to the reserved portrait area to generate a co-shooting image; wherein the target image is a person image of a second photographic subject extracted from the second image.
In one embodiment, the image fusion module 50 includes:
the image attaching unit is used for attaching the target image in the second image to a region corresponding to the reserved portrait region in the first image to obtain an attached image;
and the image fusion unit is used for carrying out image harmony processing on the attached image to generate a co-shooting image.
In one embodiment, the image attaching unit includes:
the central coordinate determining subunit is used for determining the central coordinate of the target image in the reserved portrait area according to the central coordinate of the contrast image of the first shooting object in the first image;
the imaging height determining subunit is used for determining a target height value of the target image according to the height value of the comparison image and the height ratio of the first shooting object to the second shooting object;
and the attaching subunit is used for attaching the target object in a first image according to the target center coordinate and the target height value to obtain the attached image.
In an embodiment, the central coordinate determining subunit is configured to obtain the target-center horizontal-axis coordinate of the target image in the first image according to the comparison-center horizontal-axis coordinate, the center-point horizontal-axis distance value between the designated portrait area and the reserved portrait area, the height value of the designated portrait area, the height value of the comparison image, and a first preset formula; wherein the first preset formula is:

x1 = x2 + w × (H / h),

where x1 is the target-center horizontal-axis coordinate, x2 is the comparison-center horizontal-axis coordinate, w is the center-point horizontal-axis distance value between the designated portrait area and the reserved portrait area, H is the height value of the comparison image, and h is the height value of the designated portrait area;
and to obtain the target-center vertical-axis coordinate of the target image in the first image according to the comparison-center vertical-axis coordinate, the height value of the comparison image, the height value of the first photographic subject, the height value of the second photographic subject, and a second preset formula; wherein the second preset formula is:

y1 = y2 − 0.5 × H × (H2 − H1),

where y1 is the target-center vertical-axis coordinate, y2 is the comparison-center vertical-axis coordinate, H is the height value of the comparison image, H2 is the height value of the first photographic subject, and H1 is the height value of the second photographic subject.
The imaging height determining subunit is used for obtaining a target height value of the target image in the first image according to the height value of the comparison image, the height value of the first shooting object, the height value of the second shooting object and a third preset formula;
the third preset formula is:
H′=H×(H2/H1);
H′ is the target height value.
In one embodiment, the intelligent group photo device further comprises:
the scene recognition module is used for recognizing the scene to be photographed in the shooting preview interface so as to obtain the scene type information;
and the first portrait template screening module is used for screening out the portrait templates from the preset portrait template library according to the scene type information and the preset portrait body type information.
In one embodiment, the intelligent group photo device further comprises:
and the portrait moving module is used for responding to a third operation instruction of the user on the mobile terminal when the photographing gesture of the first photographing object in the photographing preview interface is not matched with the designated portrait area, and changing the display parameters of the portrait template in the photographing preview interface.
In an embodiment, the image obtaining module 40 is configured to, in response to a shooting instruction of a user on the mobile terminal, obtain a second image shot by the mobile terminal when a shooting pose of a second shooting object in the shooting preview interface matches the reserved portrait area.
Other embodiments or specific implementation manners of the intelligent group photo device of the present invention may refer to the above method embodiments, and are not described herein again.
In addition, an embodiment of the present invention further provides a computer program product. The computer program product stores an intelligent group photo program, and when the intelligent group photo program is executed by a processor, the steps of the above intelligent group photo method are implemented; a detailed description and the identical beneficial effects are therefore not repeated here. For technical details not disclosed in the computer program product embodiment, reference is made to the description of the method embodiments of the present application. As an example, the program instructions may be deployed to be executed on one computing device, or on multiple computing devices at one site, or distributed across multiple sites interconnected by a communication network.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where units illustrated as separate components may or may not be physically separate, and components illustrated as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus the necessary general-purpose hardware, or by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, any function performed by a computer program can also be implemented by corresponding hardware, and the specific hardware structures used to implement the same function may vary: analog circuits, digital circuits, or dedicated circuits. For the present invention, however, a software implementation is generally the preferred embodiment. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored on a readable storage medium, such as a floppy disk, USB flash drive, removable hard disk, read-only memory (ROM), random-access memory (RAM), magnetic disk, or optical disk, and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope; all equivalent structural and process modifications made using the contents of the present specification and drawings, whether applied directly or indirectly in other related technical fields, fall within the scope of the present invention.

Claims (10)

1. An intelligent group photo method, characterized in that the method comprises the following steps:
when detecting that a camera of a mobile terminal is opened, generating a shooting preview interface of the mobile terminal;
responding to a first operation instruction of a user on the mobile terminal, and displaying a portrait template in a superposition manner in a corresponding area of the shooting preview interface; the portrait template comprises a designated portrait area and a reserved portrait area;
when the photographing posture of a first photographing object in the photographing preview interface is matched with the designated portrait area, acquiring a first image photographed by the mobile terminal;
responding to a second operation instruction of the user on the mobile terminal, and acquiring a second image; the photographing posture of a second photographing object in the second image is matched with the reserved portrait area;
according to the portrait template, fusing a target image in the second image into the first image to generate a co-shooting image; wherein the target image is a person image of a second photographic subject extracted from the second image.
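Outside the claim language, the fusion step of claim 1 can be sketched as mask-based compositing: the person image (target image) extracted from the second image is pasted into the first image at the reserved portrait area. Everything below is an assumption for illustration only — the claim does not specify the extraction or blending technique, and the list-of-lists grayscale representation and all names are hypothetical.

```python
def fuse_target_into_first(first_img, second_img, mask, offset):
    """Composite the masked pixels of second_img (the extracted person,
    i.e. the 'target image') into a copy of first_img at the given
    (row, col) offset of the reserved portrait area.
    Hypothetical sketch; the patent does not fix the blending method."""
    out = [row[:] for row in first_img]   # copy, leave first_img intact
    r0, c0 = offset
    for r, mask_row in enumerate(mask):
        for c, m in enumerate(mask_row):
            if m:
                out[r + r0][c + c0] = second_img[r][c]
    return out

# Tiny synthetic example: 6x6 gray "first image", 3x3 "second image"
first = [[100] * 6 for _ in range(6)]
second = [[200] * 3 for _ in range(3)]
mask = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
fused = fuse_target_into_first(first, second, mask, offset=(2, 2))
print(fused[3][3])  # center of the pasted region -> 200
```

A real implementation would derive the mask from person segmentation and the offset from the template geometry of claims 3–4; this sketch only shows the compositing itself.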
2. The intelligent group photo method according to claim 1, wherein the step of fusing the target image in the second image into the first image according to the portrait template to generate a group photo, comprises:
according to the portrait template, a target image in the second image is attached to a region corresponding to the reserved portrait region in the first image to obtain an attached image;
and carrying out image harmonization processing on the attached image to generate the co-shooting image.
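Claim 2's "image harmony processing" (harmonization) is not specified further in the claims. As a hedged stand-in, a classic simple approach is to match the mean and standard deviation of the pasted region's intensities to the surrounding background (Reinhard-style statistics transfer); the grayscale lists and function name below are assumptions, not the patent's method.

```python
import statistics

def harmonize(patch, background):
    """Shift and scale patch intensities so their mean and standard
    deviation match the background's -- a simple illustrative stand-in
    for claim 2's harmonization step (the patent does not specify it)."""
    mp, sp = statistics.mean(patch), statistics.pstdev(patch)
    mb, sb = statistics.mean(background), statistics.pstdev(background)
    scale = sb / sp if sp else 1.0
    return [mb + (v - mp) * scale for v in patch]

patch = [200, 210, 220]        # bright pixels of the pasted person
background = [100, 110, 120]   # darker surroundings in the first image
print(harmonize(patch, background))  # -> [100.0, 110.0, 120.0]
```

The cited-by literature uses a cycle-consistent GAN for the same purpose; the statistics-matching version above is only meant to make the intent of the step concrete.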
3. The intelligent group photo method according to claim 2, wherein the step of attaching the target image in the second image to the region corresponding to the reserved portrait region in the first image according to the portrait template to obtain an attached image comprises:
determining the target center coordinate of the target image in the reserved portrait area according to the center coordinate of the comparison image of the first shooting object in the first image;
determining a target height value of the target image according to the height value of the comparison image and the height ratio of the first shooting object to the second shooting object;
and attaching the target object to the first image according to the target center coordinate and the target height value to obtain the attached image.
4. The intelligent group photo method of claim 3, wherein the center coordinates comprise a comparison center horizontal axis coordinate and a comparison center vertical axis coordinate;
determining the target center coordinate of the target image in the reserved portrait area according to the center coordinate of the contrast image of the first shooting object in the first image, and the method comprises the following steps:
obtaining a target center horizontal axis coordinate of the target image in the first image according to the comparison center horizontal axis coordinate, a central point horizontal axis distance value between the designated portrait area and the reserved portrait area, a height value of the designated portrait area, a height value of the comparison image and a first preset formula; wherein the first preset formula is as follows:
[The first preset formula is presented as an image (FDA0003161155530000021) in the original publication.]
x1 is the target center horizontal-axis coordinate, x2 is the comparison center horizontal-axis coordinate, w is the horizontal-axis distance value between the center points of the designated portrait area and the reserved portrait area, H is the height value of the comparison image, and h is the height value of the designated portrait area;
obtaining the target central longitudinal axis coordinate of the target image in the first image according to the comparison central longitudinal axis coordinate, the height value of the comparison image, the height value of the first shooting object, the height value of the second shooting object and a second preset formula; wherein the second preset formula is as follows:
y1 = y2 - 0.5 × H × (H2 - H1),
y1 is the target center vertical-axis coordinate, y2 is the comparison center vertical-axis coordinate, H is the height value of the comparison image, H2 is the height value of the first shooting object, and H1 is the height value of the second shooting object;
the step of attaching the target object to the first image according to the target center coordinate and the target height value to obtain the attached image includes:
obtaining a target height value of the target image in the first image according to the height value of the comparison image, the height value of the first shooting object, the height value of the second shooting object and a third preset formula; wherein the third preset formula is as follows:
H' = H × (H2 / H1),
where H' is the target height value.
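The second and third preset formulas of claim 4 can be written out as small pure functions (the first preset formula appears only as an image in the original publication, so it is not sketched here). The function names and the example values are illustrative assumptions; per the claim, H is the comparison-image height, H2 the first shooting object's height, and H1 the second's.

```python
def target_center_y(y2, H, H1, H2):
    """Second preset formula: y1 = y2 - 0.5 * H * (H2 - H1).
    y2: comparison center vertical-axis coordinate; H: comparison image
    height; H2/H1: heights of the first/second shooting objects."""
    return y2 - 0.5 * H * (H2 - H1)

def target_height(H, H1, H2):
    """Third preset formula: H' = H * (H2 / H1)."""
    return H * (H2 / H1)

# Example (hypothetical numbers): comparison image 200 px tall,
# subjects 1.8 m and 1.6 m, comparison center y at pixel 500.
print(target_center_y(500, 200, 1.6, 1.8))  # -> 480.0
print(target_height(200, 1.6, 1.8))         # -> 225.0
```

Intuitively, the vertical offset shifts the pasted person up or down in proportion to the subjects' height difference, and the height formula rescales the pasted image by their height ratio.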
5. The intelligent group photo method according to claim 1, wherein the method further comprises, before the step of superimposing a portrait template on a corresponding area of the shooting preview interface in response to a first operation instruction of a user on the mobile terminal:
identifying and processing the scene to be photographed in the shooting preview interface to obtain scene type information;
and screening out the portrait template from a preset portrait template library according to the scene type information and preset person body type information.
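Claim 5's screening step selects a template from a preset library using the recognized scene type and body-type information. A minimal filter sketch follows; the dictionary fields, template names, and first-match policy are all hypothetical, since the claim does not define the library's structure or the selection rule.

```python
def screen_template(library, scene_type, body_type):
    """Return the first template whose scene and body-type fields match
    the recognized scene and the preset body-type info (claim 5's
    screening step). Field names are hypothetical."""
    for tpl in library:
        if tpl["scene"] == scene_type and tpl["body_type"] == body_type:
            return tpl
    return None

library = [
    {"name": "beach_duo", "scene": "beach", "body_type": "adult"},
    {"name": "park_duo",  "scene": "park",  "body_type": "adult"},
]
print(screen_template(library, "park", "adult")["name"])  # -> park_duo
```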
6. The intelligent group photo method according to claim 1, wherein before the step of obtaining the first image captured by the mobile terminal when the photographing gesture of the first photographic object in the photographing preview interface matches the designated portrait area, the method further comprises:
and when the photographing gesture of the first photographing object in the photographing preview interface is not matched with the designated portrait area, responding to a third operation instruction of the user on the mobile terminal, and changing the display parameters of the portrait template in the photographing preview interface.
7. The intelligent group photo method according to claim 1, wherein the step of acquiring the second image in response to the second operation instruction of the user on the mobile terminal comprises:
and responding to a shooting instruction of a user on the mobile terminal, and acquiring a second image shot by the mobile terminal when the shooting posture of a second shooting object in the shooting preview interface is matched with the reserved portrait area.
8. An intelligent group photo device, comprising:
the interface display module is used for generating a shooting preview interface of the mobile terminal when detecting that a camera of the mobile terminal is started;
the superposition display module is used for responding to a first operation instruction of a user on the mobile terminal and superposing and displaying a portrait template in a corresponding area of the shooting preview interface; the portrait template comprises a designated portrait area and a reserved portrait area;
the image shooting module is used for acquiring a first image shot by the mobile terminal when the shooting posture of a first shooting object in the shooting preview interface is matched with the designated portrait area;
the image acquisition module is used for responding to a second operation instruction of the user on the mobile terminal and acquiring a second image; the photographing posture of a second photographing object in the second image is matched with the reserved portrait area;
the image fusion module is used for fusing a target image in the second image into the first image according to the portrait template so as to generate a co-shooting image; wherein the target image is a person image of a second photographic subject extracted from the second image.
9. A mobile terminal, characterized in that it comprises a memory, a processor and a computer program stored on said memory and executable on said processor, said computer program being configured to implement the steps of the method according to any one of claims 1 to 7.
10. A computer program product comprising executable program code, wherein the program code, when executed by a processor, performs the method of any of claims 1 to 7.
CN202110792916.5A 2021-07-13 2021-07-13 Intelligent group photo method, device, mobile terminal and computer program product Pending CN113596323A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110792916.5A CN113596323A (en) 2021-07-13 2021-07-13 Intelligent group photo method, device, mobile terminal and computer program product


Publications (1)

Publication Number Publication Date
CN113596323A true CN113596323A (en) 2021-11-02

Family

ID=78247309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110792916.5A Pending CN113596323A (en) 2021-07-13 2021-07-13 Intelligent group photo method, device, mobile terminal and computer program product

Country Status (1)

Country Link
CN (1) CN113596323A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578028A (en) * 2015-07-28 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Photographing method and terminal
CN105872381A (en) * 2016-04-29 2016-08-17 潘成军 Interesting image shooting method
US20170041549A1 (en) * 2015-08-03 2017-02-09 Lg Electronics Inc. Mobile terminal and method for controlling the same
CN110351495A (en) * 2018-04-08 2019-10-18 中兴通讯股份有限公司 A kind of method, apparatus, equipment and the storage medium of mobile terminal shooting group photo
CN110602396A (en) * 2019-09-11 2019-12-20 腾讯科技(深圳)有限公司 Intelligent group photo method and device, electronic equipment and storage medium
WO2020029306A1 (en) * 2018-08-10 2020-02-13 华为技术有限公司 Image capture method and electronic device
WO2021135601A1 (en) * 2019-12-31 2021-07-08 华为技术有限公司 Auxiliary photographing method and apparatus, terminal device, and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549974A (en) * 2022-01-26 2022-05-27 西宁城市职业技术学院 Interaction method of multiple intelligent devices based on user
CN116012258A (en) * 2023-02-14 2023-04-25 山东大学 Image harmony method based on cyclic generation countermeasure network
CN116012258B (en) * 2023-02-14 2023-10-13 山东大学 Image harmony method based on cyclic generation countermeasure network

Similar Documents

Publication Publication Date Title
CN110929651B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110210571B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN111541907B (en) Article display method, apparatus, device and storage medium
CN110097576B (en) Motion information determination method of image feature point, task execution method and equipment
CN109815150B (en) Application testing method and device, electronic equipment and storage medium
JP7387202B2 (en) 3D face model generation method, apparatus, computer device and computer program
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN108762501B (en) AR display method, intelligent terminal, AR device and AR system
CN111324250B (en) Three-dimensional image adjusting method, device and equipment and readable storage medium
CN112287852B (en) Face image processing method, face image display method, face image processing device and face image display equipment
JP2021520577A (en) Image processing methods and devices, electronic devices and storage media
CN113228688B (en) System and method for creating wallpaper images on a computing device
CN113596323A (en) Intelligent group photo method, device, mobile terminal and computer program product
CN112532881B (en) Image processing method and device and electronic equipment
CN110570460A (en) Target tracking method and device, computer equipment and computer readable storage medium
CN112581571B (en) Control method and device for virtual image model, electronic equipment and storage medium
CN111680758B (en) Image training sample generation method and device
CN114170349A (en) Image generation method, image generation device, electronic equipment and storage medium
CN110599593A (en) Data synthesis method, device, equipment and storage medium
CN108632543A (en) Method for displaying image, device, storage medium and electronic equipment
CN110807769B (en) Image display control method and device
CN112308103B (en) Method and device for generating training samples
CN115760931A (en) Image processing method and electronic device
CN110312144B (en) Live broadcast method, device, terminal and storage medium
CN112257594A (en) Multimedia data display method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
