CN107230240B - Shooting method and mobile terminal - Google Patents

Shooting method and mobile terminal

Info

Publication number
CN107230240B
Authority
CN
China
Prior art keywords
image
library
mobile terminal
target object
photo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710369887.5A
Other languages
Chinese (zh)
Other versions
CN107230240A (en)
Inventor
李德健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201710369887.5A priority Critical patent/CN107230240B/en
Publication of CN107230240A publication Critical patent/CN107230240A/en
Application granted granted Critical
Publication of CN107230240B publication Critical patent/CN107230240B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing

Abstract

The invention discloses a shooting method and a mobile terminal. The method comprises the following steps: detecting, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition; sending an image acquisition request to a target object, wherein the target object is an object in the object library that does not satisfy the group photo condition; receiving the image sent by the target object and merging it into a preview image; and generating a picture. By establishing the object library in advance and detecting whether the objects in the object library satisfy the group photo condition, the invention can intelligently identify members who are not at the group photo site, automatically associate those users, send them an image acquisition request asking them to provide photos, and merge the photos into the preview image of the group photo site. This achieves genuine remote, real-time, intelligent group photography, avoids the regret of absent members being unable to take part in the group photo, and improves the group photo experience when users get together.

Description

Shooting method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a shooting method and a mobile terminal.
Background
As mobile phones become smarter and their camera resolutions keep increasing, users can meet everyday shooting needs with a phone alone. At present, at small gatherings a smartphone is typically used to photograph all members present at the scene at the same time and then output the group photo; members who cannot be present for various reasons simply cannot take part in the group photo.
One prior-art solution is to paste the photo of a person who needs to be added into the group photo afterwards using image-editing (Photoshop) techniques, so that relatives and friends who could not attend for personal reasons are not left out with regret. However, this scheme lags far behind the shooting itself, and because no position was reserved for the absent person when the photo was taken, the later embedding is difficult, resulting in a poor user experience.
Disclosure of Invention
In view of this, the present invention provides a shooting method and a mobile terminal to overcome the above defects and to solve the poor user experience caused in the prior art by forming a group photo through embedding a portrait at a later stage.
The embodiment of the invention provides a shooting method, which is applied to a mobile terminal and comprises the following steps:
detecting, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition;
sending an image acquisition request to a target object, wherein the target object is an object in the object library that does not satisfy the group photo condition;
receiving the image sent by the target object, and combining the image into a preview image;
and generating the picture.
Accordingly, an embodiment of the present invention further provides a mobile terminal, including:
the detection module is used for detecting, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition;
the request module is used for sending an image acquisition request to a target object, wherein the target object is an object in the object library that does not satisfy the group photo condition;
the receiving module is used for receiving the image sent by the target object and combining the image into the preview image;
and the generating module is used for generating the picture.
In the embodiment of the invention, by establishing the object library in advance and detecting whether the objects in the object library satisfy the group photo condition, members who are not at the group photo site can be intelligently identified; those users are automatically associated and sent an image acquisition request asking them to provide photos, and the photos are merged into the preview image of the group photo site. This achieves genuine remote, real-time, intelligent group photography, avoids the regret of absent members being unable to take part in the group photo, and improves the group photo experience when users get together.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a shooting method according to an embodiment of the present invention;
Fig. 2 is a flowchart of detecting whether an object in the object library satisfies the group photo condition according to an embodiment of the present invention;
Fig. 3 is a flowchart of detecting whether an object in the object library satisfies the group photo condition according to a second embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a mobile terminal according to a third embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a mobile terminal according to a fourth embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a mobile terminal according to a fifth embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a mobile terminal according to a sixth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
An embodiment of the present invention provides a shooting method, which is applied to a mobile terminal, and as shown in fig. 1, the shooting method includes the following steps:
S100, detecting, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition;
S200, sending an image acquisition request to a target object, wherein the target object is an object in the object library that does not satisfy the group photo condition;
S300, receiving the image sent by the target object, and merging the image into a preview image;
and S400, generating a picture.
By establishing the object library in advance and detecting whether the objects in the object library satisfy the group photo condition, the shooting method provided by this embodiment can intelligently identify members who are not at the group photo site, automatically associate those users, send them an image acquisition request asking them to provide photos, and merge the photos into the preview image of the group photo site. This achieves genuine remote, real-time, intelligent group photography, avoids the regret of absent members being unable to take part in the group photo, and improves the group photo experience when users get together.
In the present invention, the object library may be a group created in advance that contains all the objects who wish to join the group photo. An object that is detected to be in the group but not at the group photo site is identified as a target object, and an image is requested from that target object. The image sent by the target object may be a recent pre-stored photo, a real-time selfie, or a video screenshot.
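For illustration only, the following Kotlin sketch shows one possible in-memory shape for such an object library. The field names (memberId, faceFeature, contact) are assumptions made for the example and are not defined by the patent.

```kotlin
// Minimal sketch of the pre-established object library described above.
// All field names are illustrative assumptions, not part of the patent text.
data class LibraryObject(
    val memberId: String,          // identity of the group photo member
    val faceFeature: FloatArray,   // face feature vector used later for comparison
    val contact: String            // address used to send the image acquisition request
)

class ObjectLibrary {
    private val objects = mutableListOf<LibraryObject>()

    fun add(obj: LibraryObject) { objects.add(obj) }

    // Objects whose ids are NOT among the recognized on-site ids are the
    // candidates that fail the group photo condition (the "target objects").
    fun absentObjects(onSiteIds: Set<String>): List<LibraryObject> =
        objects.filter { it.memberId !in onSiteIds }
}
```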
Preferably, as shown in Fig. 2, the step in S100 of detecting, according to the pre-established object library, whether the objects in the object library satisfy the group photo condition includes:
S101, identifying objects in a preview image of a camera;
S102, acquiring, according to a pre-established object library, the objects in the object library other than the objects in the preview image.
In this embodiment, users who are not present are determined mainly by image recognition: the objects recognized in the preview image are compared with the objects in the object library, and the library objects that do not appear in the preview image are determined to be objects not at the group photo scene.
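A minimal sketch of this comparison, assuming a feature-vector face representation and reusing the LibraryObject type from the earlier sketch; the L2 distance and the threshold value are illustrative assumptions, not part of the described method.

```kotlin
import kotlin.math.sqrt

// Hypothetical sketch of steps S101 and S102: match faces recognized in the
// camera preview against the library and return the unmatched (off-site) members.
fun findOffSiteMembers(
    previewFeatures: List<FloatArray>,
    library: List<LibraryObject>,
    threshold: Float = 0.6f            // assumed similarity threshold
): List<LibraryObject> {
    fun l2(a: FloatArray, b: FloatArray): Float {
        var s = 0f
        for (i in a.indices) { val d = a[i] - b[i]; s += d * d }
        return sqrt(s)
    }
    // A library member counts as "on site" if any preview face is close enough
    // to that member's stored feature vector.
    return library.filter { member ->
        previewFeatures.none { face -> l2(face, member.faceFeature) < threshold }
    }
}
```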
Preferably, the object is face information, and the object library stores a plurality of pieces of face information.
Accordingly, as a specific implementation manner, the step in S100 of detecting whether the objects in the object library satisfy the group photo condition includes:
acquiring face information in a preview image of a camera;
and identifying, according to a pre-established face library and through face information comparison, the faces in the face library that are not in the preview image.
Specifically, taking the creation of a group photo group as an example, the face library may be created and the face information compared in any of the following ways (a sketch follows the list):
the members of the group actively provide pictures containing their face information to the shooting terminal device for face recognition;
or, the members of the group authorize access to pictures containing their face information on their own terminal devices for face recognition;
or, pictures containing the members' face information are obtained for face recognition by analyzing the members' group avatars.
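The sketch below illustrates, under assumed types, how a face library could be populated from any of these three picture sources. extractFaceFeature stands in for whatever face-recognition engine the terminal actually uses and is purely hypothetical.

```kotlin
// Illustrative sketch of the three ways of populating the face library listed above.
enum class PictureSource { PROVIDED_BY_MEMBER, AUTHORIZED_DEVICE_ACCESS, GROUP_AVATAR }

data class MemberPicture(val memberId: String, val source: PictureSource, val pixels: ByteArray)

fun buildFaceLibrary(
    pictures: List<MemberPicture>,
    extractFaceFeature: (ByteArray) -> FloatArray   // assumed recognition helper
): Map<String, FloatArray> =
    pictures.associate { pic ->
        // Regardless of how the picture was obtained, the stored entry is the
        // same: member id -> face feature used later for comparison.
        pic.memberId to extractFaceFeature(pic.pixels)
    }
```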
Preferably, the step of sending an image acquisition request to the target object in S200 includes:
establishing a communication connection with the target object;
requesting the target object to send an image, wherein the image comprises a real-time self-shot photo or a pre-stored photo.
The step of establishing a communication connection with the target object includes, but is not limited to, actively contacting the target object by telephone, text message, WeChat, and the like.
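A hedged sketch of this request step, with the transport (telephone, text message, WeChat, or another channel) abstracted behind a hypothetical RemoteChannel interface; it reuses the LibraryObject type assumed earlier, and none of these names come from the patent itself.

```kotlin
// Sketch of step S200: associate the target object's terminal and ask it for
// an image (a live self-portrait or a pre-stored photo).
sealed class RequestedImage {
    data class LivePhoto(val pixels: ByteArray) : RequestedImage()
    data class StoredPhoto(val pixels: ByteArray) : RequestedImage()
}

interface RemoteChannel {                       // assumed abstraction, not a real API
    fun connect(contact: String): Boolean
    fun requestImage(contact: String): RequestedImage?
}

fun acquireImageFromTarget(target: LibraryObject, channel: RemoteChannel): RequestedImage? {
    if (!channel.connect(target.contact)) return null   // association was refused
    return channel.requestImage(target.contact)         // photo chosen by the remote user
}
```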
Preferably, the step of generating a picture in S400 includes:
receiving an adjustment instruction;
adjusting the position and size of the image merged into the preview image.
When the image of an off-site user is merged into the group photo preview image in real time, the shooting terminal, after receiving the adjustment instruction, moves the off-site user's image to a suitable position and adjusts the size, color, and so on of the embedded photo, thereby obtaining a group photo that includes the off-site user and greatly improving the user's group photo experience.
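One possible way to model the adjustment instruction and its effect on the embedded image is sketched below; the Placement structure and instruction types are assumptions made for illustration, not structures defined by the patent.

```kotlin
// Minimal sketch of step S400: apply an adjustment instruction to the image
// that was merged into the preview. Position is in preview-pixel coordinates.
data class Placement(val x: Int, val y: Int, val scale: Float)

sealed class AdjustInstruction {
    data class Move(val dx: Int, val dy: Int) : AdjustInstruction()
    data class Resize(val factor: Float) : AdjustInstruction()
}

fun applyAdjustment(p: Placement, instr: AdjustInstruction): Placement = when (instr) {
    is AdjustInstruction.Move   -> p.copy(x = p.x + instr.dx, y = p.y + instr.dy)
    is AdjustInstruction.Resize -> p.copy(scale = p.scale * instr.factor)
}
```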
A specific implementation of the shooting method provided in this embodiment is as follows. In the description below, user A is the off-site user, and user B's mobile phone is the shooting terminal for the group photo.
The specific process is as follows:
User B's mobile phone creates a group photo group and adds all group photo members to the group;
the group members provide personal photos, update their actual avatars, or authorize creator user B's mobile phone to access and analyze their photos;
user B opens the phone camera and enters the smart group photo mode;
user B's phone enters the shooting preview interface and recognizes the faces in the preview interface;
user B's phone compares the photos provided by each member against the preview interface to find the members judged not to be on site, identifying the users in the group whose photos cannot be matched mainly through comparison of face information;
user B's phone identifies that user A is absent and starts an instruction to associate user A's phone; after allowing the association, user A can choose to send an existing photo or enter a preview photo mode to transmit a photo in real time;
user B's phone embeds the extracted valid image of user A into the shooting preview interface;
user B's phone adjusts the valid image of user A, manually or automatically, to a suitable position in the preview interface;
user B's phone displays the group photo picture including the embedded image of user A, and the group photo shooting is completed.
For the situation where off-site users cannot take part in the group photo, this embodiment mainly identifies the off-site users in the preview interface through image recognition and automatically requests real-time images from them, thereby realizing a remote group photo.
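Tying the steps of this example together, the following sketch reuses the assumed types from the earlier sketches (LibraryObject, RemoteChannel, RequestedImage, Placement, findOffSiteMembers, acquireImageFromTarget); mergeIntoPreview is a hypothetical preview API, not one defined by the patent.

```kotlin
// End-to-end sketch of this embodiment under the assumed types above.
fun remoteGroupPhoto(
    library: List<LibraryObject>,
    previewFeatures: List<FloatArray>,
    channel: RemoteChannel,
    mergeIntoPreview: (RequestedImage, Placement) -> Unit   // assumed preview API
) {
    val offSite = findOffSiteMembers(previewFeatures, library)
    for (member in offSite) {
        val image = acquireImageFromTarget(member, channel) ?: continue
        // Start at a default placement; the photographer adjusts it afterwards.
        mergeIntoPreview(image, Placement(x = 0, y = 0, scale = 1f))
    }
}
```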
Example two
The second embodiment of the present invention provides another shooting method applied to a mobile terminal, where the shooting method includes the following steps:
S100, detecting, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition;
S200, sending an image acquisition request to a target object, wherein the target object is an object in the object library that does not satisfy the group photo condition;
S300, receiving the image sent by the target object, and merging the image into a preview image;
and S400, generating a picture.
As shown in Fig. 3, the step in S100 of detecting whether the objects in the object library satisfy the group photo condition includes:
S103, acquiring the position of the mobile terminal;
S104, detecting the distance between the first mobile terminal corresponding to each object in the object library and the mobile terminal;
and S105, when the distance between the first mobile terminal corresponding to a certain object and the mobile terminal is greater than a preset distance, judging that the object does not satisfy the group photo condition.
In this embodiment, members who are not on site are identified through distance detection: after the group photo preview interface is entered, the distance between each member and the shooting terminal is detected, members who are far away are identified as off-site users, and those users are automatically associated and sent a photo request, thereby realizing a remote real-time group photo.
Taking a mobile phone as an example, existing smartphones all have a positioning function and can obtain the phone's specific location information through GPS, Wi-Fi, or a data network.
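As an illustration of the distance check, the sketch below compares each member terminal's reported position with the shooting terminal using a haversine distance; the 100 m preset distance is an arbitrary example value, not taken from the patent.

```kotlin
import kotlin.math.*

data class GeoPoint(val latDeg: Double, val lonDeg: Double)

// Great-circle (haversine) distance between two positions, in meters.
fun distanceMeters(a: GeoPoint, b: GeoPoint): Double {
    val r = 6_371_000.0
    val dLat = Math.toRadians(b.latDeg - a.latDeg)
    val dLon = Math.toRadians(b.lonDeg - a.lonDeg)
    val h = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(a.latDeg)) * cos(Math.toRadians(b.latDeg)) * sin(dLon / 2).pow(2)
    return 2 * r * asin(sqrt(h))
}

// Sketch of steps S103 to S105: members whose terminals are farther than the
// preset distance are judged not to satisfy the group photo condition.
fun offSiteByDistance(
    shootingTerminal: GeoPoint,
    memberPositions: Map<String, GeoPoint>,   // member id -> position of that member's terminal
    presetDistanceMeters: Double = 100.0      // assumed example threshold
): List<String> =
    memberPositions.filter { (_, pos) ->
        distanceMeters(shootingTerminal, pos) > presetDistanceMeters
    }.keys.toList()
```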
Preferably, the step of sending an image acquisition request to the target object in S200 includes:
establishing a communication connection with the target object;
requesting the target object to send an image, wherein the image comprises a real-time self-shot photo or a pre-stored photo.
The step of establishing a communication connection with the target object includes, but is not limited to, actively contacting the target object by telephone, text message, WeChat, and the like.
Preferably, the step of generating a picture in S400 includes:
receiving an adjustment instruction;
adjusting the position and size of the image merged into the preview image.
S200 to S400 are similar to the first embodiment, and reference may be made to the first embodiment, which is not described herein again.
A specific implementation of the shooting method provided in this embodiment is as follows. In the description below, user A is the off-site user, and user B's mobile phone is the shooting terminal for the group photo.
The specific process is as follows:
User B's mobile phone creates a group photo group and adds all group photo members to the group;
user B opens the phone camera and enters the smart group photo mode;
user B's phone obtains the location information of the other users' phones;
user B's phone compares the location information and determines the distance between each member and user B's phone;
based on the distance information, user B's phone recognizes that user A's distance exceeds the preset distance and judges that user A is absent; the phone starts an instruction to associate user A's phone, and after allowing the association, user A can choose to send an existing photo or enter a preview photo mode to transmit a photo in real time;
user B's phone embeds the extracted valid image of user A into the shooting preview interface;
user B's phone adjusts the valid image of user A, manually or automatically, to a suitable position in the preview interface;
user B's phone displays the group photo picture including the embedded image of user A, and the group photo shooting is completed.
For the situation where off-site users cannot take part in the group photo, this embodiment identifies the off-site users by detecting the distance between each member and the shooting terminal and automatically requests real-time images from them, thereby realizing a remote group photo.
Example three
A third embodiment of the present invention provides a mobile terminal, as shown in fig. 4, where the mobile terminal 100 includes:
a detection module 101, configured to detect, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition;
a request module 102, configured to send an image acquisition request to a target object, where the target object is an object in the object library that does not satisfy the group photo condition;
a receiving module 103, configured to receive the image sent by the target object and merge the image into a preview image;
and a generating module 104, configured to generate a picture.
By establishing the object library in advance and detecting whether the objects in the object library satisfy the group photo condition, the mobile terminal provided by this embodiment can intelligently identify members who are not at the group photo site, automatically associate those users, send them an image acquisition request asking them to provide photos, and merge the photos into the preview image of the group photo site. This achieves genuine remote, real-time, intelligent group photography, avoids the regret of absent members being unable to take part in the group photo, and improves the group photo experience when users get together.
Preferably, the detection module 101 comprises:
an identifying unit 1011 for identifying an object in a preview image of the camera;
a first obtaining unit 1012, configured to obtain, according to a pre-established object library, an object in the object library except for the object in the preview image.
Preferably, the object is face information, and the object library stores a plurality of pieces of face information.
Accordingly, as a specific implementation manner, the detection module 101 includes:
the second acquisition unit is used for acquiring the face information in the preview image of the camera;
and the comparison unit is used for identifying, according to a pre-established face library and through face information comparison, the faces in the face library that are not in the preview image.
Further, the request module 102 includes:
the communication connection unit is used for establishing communication connection with the target object;
and the image request unit is used for requesting the target object to send an image, wherein the image comprises a real-time self-shot photo or a pre-stored photo.
Further, the generating module 104 includes:
a receiving unit for receiving an adjustment instruction;
and an adjusting unit for adjusting the position and size of the image incorporated into the preview image.
The mobile terminal 100 can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 and fig. 2, and is not described herein again to avoid repetition.
Compared with the prior art, for the situation where off-site users cannot take part in the group photo, the mobile terminal provided by this embodiment mainly identifies the off-site users in the preview interface through image recognition and automatically requests real-time images from them, thereby realizing a remote group photo.
Example four
A fourth embodiment of the present invention provides a mobile terminal, as shown in fig. 5, where the mobile terminal 200 includes:
the detection module 201 is configured to detect, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition;
a request module 202, configured to send an image acquisition request to a target object, where the target object is an object in the object library that does not satisfy the group photo condition;
a receiving module 203, configured to receive an image sent by the target object, and combine the image into a preview image;
and a generating module 204 for generating a picture.
By establishing the object library in advance and detecting whether the objects in the object library satisfy the group photo condition, the mobile terminal provided by this embodiment can intelligently identify members who are not at the group photo site, automatically associate those users, send them an image acquisition request asking them to provide photos, and merge the photos into the preview image of the group photo site. This achieves genuine remote, real-time, intelligent group photography, avoids the regret of absent members being unable to take part in the group photo, and improves the group photo experience when users get together.
Preferably, the detection module 201 comprises:
a third acquiring unit 2011, configured to acquire a location of the mobile terminal;
a distance detecting unit 2012, configured to detect a distance between a first mobile terminal corresponding to each object in the object library and the mobile terminal;
the judging unit 2013 is configured to judge that an object does not satisfy the group photo condition when the distance between the first mobile terminal corresponding to that object and the mobile terminal is greater than a preset distance.
Further, the request module 202 includes:
the communication connection unit is used for establishing communication connection with the target object;
and the image request unit is used for requesting the target object to send an image, wherein the image comprises a real-time self-shot photo or a pre-stored photo.
Further, the generating module 204 includes:
a receiving unit for receiving an adjustment instruction;
and an adjusting unit for adjusting the position and size of the image incorporated into the preview image.
The mobile terminal 200 can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 and fig. 3, and is not described herein again to avoid repetition.
For the situation where off-site users cannot take part in the group photo, the mobile terminal provided by this embodiment identifies the off-site users by detecting the distance between each member and the shooting terminal and automatically requests real-time images from them, thereby realizing a remote group photo.
Example five
Fig. 6 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 700 shown in Fig. 6 includes: at least one processor 701, a memory 702, at least one network interface 704, and other user interfaces 703. The various components in the mobile terminal 700 are coupled together by a bus system 705. It is understood that the bus system 705 is used to enable communications among these components. In addition to a data bus, the bus system 705 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are labeled in Fig. 6 as the bus system 705.
The user interface 703 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It is to be understood that the memory 702 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), and enhanced SDRAM (ESDRAM). The memory 702 of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 702 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 7021 and application programs 7022.
The operating system 7021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 7022 includes various applications, such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. Programs that implement methods in accordance with embodiments of the present invention can be included within application program 7022.
In the embodiment of the present invention, by calling a program or instruction stored in the memory 702, specifically a program or instruction stored in the application 7022, the processor 701 is configured to:
detect, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition; send an image acquisition request to a target object, wherein the target object is an object in the object library that does not satisfy the group photo condition; receive the image sent by the target object, and merge the image into a preview image; and generate a picture.
The method disclosed in the above embodiments of the present invention may be applied to the processor 701, or implemented by the processor 701. The processor 701 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on. The steps of the method disclosed in connection with the embodiments of the present invention may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or EPROM, or registers. The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above method in combination with its hardware.
For a hardware implementation, the processing units may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Preferably, the processor 701 is further configured to:
in the step of detecting, according to the pre-established object library, whether the objects in the object library satisfy the group photo condition:
identifying an object in a preview image of a camera; and acquiring objects in the object library except the objects in the preview image according to a pre-established object library.
Preferably, the object is face information, and the object library stores a plurality of pieces of face information.
Preferably, the step of detecting whether the objects in the object library satisfy the group photo condition includes:
acquiring face information in a preview image of a camera; and identifying, according to a pre-established face library and through face information comparison, the faces in the face library that are not in the preview image.
Preferably, the processor 701 is further configured to:
in the step of detecting, according to the pre-established object library, whether the objects in the object library satisfy the group photo condition:
acquiring the position of the mobile terminal; detecting the distance between the first mobile terminal corresponding to each object in the object library and the mobile terminal; and when the distance between the first mobile terminal corresponding to a certain object and the mobile terminal is greater than a preset distance, judging that the object does not satisfy the group photo condition.
Preferably, the processor 701 is further configured to:
in the step of sending an image acquisition request to the target object:
establishing a communication connection with the target object; requesting the target object to send an image, wherein the image comprises a real-time self-shot photo or a pre-stored photo.
Preferably, the processor 701 is further configured to:
in the step of generating the picture:
receiving an adjustment instruction;
adjusting the position and size of the image merged into the preview image.
The mobile terminal 700 can implement the processes implemented by the mobile terminal in the foregoing embodiments, and details are not repeated here to avoid repetition.
By establishing the object library in advance and detecting whether the objects in the object library satisfy the group photo condition, the mobile terminal provided by this embodiment can intelligently identify members who are not at the group photo site, automatically associate those users, send them an image acquisition request asking them to provide photos, and merge the photos into the preview image of the group photo site. This achieves genuine remote, real-time, intelligent group photography, avoids the regret of absent members being unable to take part in the group photo, and improves the group photo experience when users get together.
Fig. 7 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 800 in fig. 7 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 800 in Fig. 7 includes a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a processor 860, an audio circuit 870, a WiFi (Wireless Fidelity) module 880, and a power supply 890.
The input unit 830 may be used, among other things, to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 800. Specifically, in the embodiment of the present invention, the input unit 830 may include a touch panel 831. The touch panel 831, also referred to as a touch screen, can collect touch operations performed by a user on or near the touch panel 831 (e.g., operations performed by the user on the touch panel 831 using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 831 may include two portions, i.e., a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 860, and can receive and execute commands sent by the processor 860. In addition, the touch panel 831 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 831, the input unit 830 may include other input devices 832, and the other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 840 may include a display panel 841. Optionally, the display panel 841 may be configured in the form of an LCD (Liquid Crystal Display), an organic light-emitting diode (OLED) display, or the like.
It should be noted that the touch panel 831 may cover the display panel 841 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is passed to the processor 860 to determine the type of touch event, and the processor 860 then provides a corresponding visual output on the touch display screen according to the type of touch event.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement modes of the application program interface display area and the common control display area are not limited, and can be an arrangement mode which can distinguish two display areas, such as vertical arrangement, left-right arrangement and the like. The application interface display area may be used to display an interface of an application. Each interface may contain at least one interface element such as an icon and/or widget desktop control for an application. The application interface display area may also be an empty interface that does not contain any content. The common control display area is used for displaying controls with high utilization rate, such as application icons like setting buttons, interface numbers, scroll bars, phone book icons and the like.
The processor 860 is a control center of the mobile terminal 800, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile terminal 800 and processes data by operating or executing software programs and/or modules stored in the first memory 821 and calling data stored in the second memory 822, thereby integrally monitoring the mobile terminal 800. Optionally, processor 860 may include one or more processing units.
In an embodiment of the present invention, the processor 860 is configured to, by invoking software programs and/or modules stored in the first memory 821 and/or data stored in the second memory 822:
detect, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition; send an image acquisition request to a target object, wherein the target object is an object in the object library that does not satisfy the group photo condition; receive the image sent by the target object, and merge the image into a preview image; and generate a picture.
Preferably, the processor 860 is further configured to:
in the step of detecting, according to the pre-established object library, whether the objects in the object library satisfy the group photo condition:
identifying an object in a preview image of a camera; and acquiring objects in the object library except the objects in the preview image according to a pre-established object library.
Preferably, the object is face information, and the object library stores a plurality of pieces of face information.
Preferably, the step of detecting whether the objects in the object library satisfy the group photo condition includes:
acquiring face information in a preview image of a camera; and identifying, according to a pre-established face library and through face information comparison, the faces in the face library that are not in the preview image.
Preferably, the processor 860 is further configured to:
in the step of detecting, according to the pre-established object library, whether the objects in the object library satisfy the group photo condition:
acquiring the position of the mobile terminal; detecting the distance between the first mobile terminal corresponding to each object in the object library and the mobile terminal; and when the distance between the first mobile terminal corresponding to a certain object and the mobile terminal is greater than a preset distance, judging that the object does not satisfy the group photo condition.
Preferably, the processor 860 is further configured to:
in the step of sending an image acquisition request to the target object:
establishing a communication connection with the target object; requesting the target object to send an image, wherein the image comprises a real-time self-shot photo or a pre-stored photo.
Preferably, the processor 860 is further configured to:
in the step of generating the picture:
receiving an adjustment instruction;
adjusting the position and size of the image merged into the preview image.
As can be seen, by establishing the object library in advance and detecting whether the objects in the object library satisfy the group photo condition, the mobile terminal provided by this embodiment can intelligently identify members who are not at the group photo site, automatically associate those users, send them an image acquisition request asking them to provide photos, and merge the photos into the preview image of the group photo site. This achieves genuine remote, real-time, intelligent group photography, avoids the regret of absent members being unable to take part in the group photo, and improves the group photo experience when users get together.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A shooting method is applied to a mobile terminal, and is characterized by comprising the following steps:
detecting, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition;
sending an image acquisition request to a target object, wherein the target object is an object in the object library that does not satisfy the group photo condition;
receiving the image sent by the target object, and combining the image into a preview image;
generating a picture;
the step of detecting whether the objects in the object library satisfy the group photo condition comprises:
acquiring the position of a mobile terminal;
detecting the distance between a first mobile terminal corresponding to each object in the object library and the mobile terminal;
and when the distance between the first mobile terminal corresponding to a certain object and the mobile terminal is greater than a preset distance, judging that the object does not satisfy the group photo condition.
2. The photographing method according to claim 1, wherein the step of detecting, according to a pre-established object library, whether the objects in the object library satisfy the group photo condition comprises:
identifying an object in a preview image of a camera;
and acquiring objects in the object library except the objects in the preview image according to a pre-established object library.
3. The photographing method according to claim 1, wherein the object is face information, and the object library stores a plurality of pieces of face information.
4. The photographing method according to claim 3, wherein the step of detecting whether the objects in the object library satisfy the group photo condition includes:
acquiring face information in a preview image of a camera;
and identifying, according to a pre-established face library and through face information comparison, the faces in the face library that are not in the preview image.
5. The photographing method according to any one of claims 1 to 4, wherein the step of sending an image acquisition request to a target object includes:
establishing communication connection with a target object;
requesting the target object to send an image, wherein the image comprises a real-time self-shot photo or a pre-stored photo.
6. The photographing method according to any one of claims 1 to 4, wherein the step of generating a picture includes:
receiving an adjustment instruction;
adjusting the position and size of the image merged into the preview image.
7. A mobile terminal, comprising:
the detection module is used for detecting, according to a pre-established object library, whether the objects in the object library satisfy a group photo condition;
the request module is used for sending an image acquisition request to a target object, wherein the target object is an object in the object library that does not satisfy the group photo condition;
the receiving module is used for receiving the image sent by the target object and combining the image into the preview image;
the generating module is used for generating pictures;
the detection module comprises:
a third obtaining unit, configured to obtain a location of the mobile terminal;
the distance detection unit is used for detecting the distance between a first mobile terminal corresponding to each object in the object library and the mobile terminal;
the judging unit is used for judging that an object does not satisfy the group photo condition when the distance between the first mobile terminal corresponding to that object and the mobile terminal is greater than a preset distance.
8. The mobile terminal of claim 7, wherein the detection module comprises:
an identifying unit for identifying an object in a preview image of the camera;
and the first acquisition unit is used for acquiring objects in the object library except the objects in the preview image according to a pre-established object library.
9. The mobile terminal according to claim 7, wherein the object is face information, and the object library stores a plurality of pieces of face information.
10. The mobile terminal of claim 9, wherein the detection module comprises:
the second acquisition unit is used for acquiring the face information in the preview image of the camera;
and the comparison unit is used for identifying, according to a pre-established face library and through face information comparison, the faces in the face library that are not in the preview image.
11. The mobile terminal according to any of claims 7 to 10, wherein the requesting module comprises:
the communication connection unit is used for establishing communication connection with the target object;
and the image request unit is used for requesting the target object to send an image, wherein the image comprises a real-time self-shot photo or a pre-stored photo.
12. The mobile terminal according to any of claims 7 to 10, wherein the generating module comprises:
a receiving unit for receiving an adjustment instruction;
and an adjusting unit for adjusting the position and size of the image incorporated into the preview image.
CN201710369887.5A 2017-05-23 2017-05-23 Shooting method and mobile terminal Active CN107230240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710369887.5A CN107230240B (en) 2017-05-23 2017-05-23 Shooting method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710369887.5A CN107230240B (en) 2017-05-23 2017-05-23 Shooting method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107230240A CN107230240A (en) 2017-10-03
CN107230240B true CN107230240B (en) 2020-08-07

Family

ID=59934626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710369887.5A Active CN107230240B (en) 2017-05-23 2017-05-23 Shooting method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107230240B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110602396B (en) * 2019-09-11 2022-03-22 腾讯科技(深圳)有限公司 Intelligent group photo method and device, electronic equipment and storage medium
CN113473239B (en) * 2020-07-15 2023-10-13 青岛海信电子产业控股股份有限公司 Intelligent terminal, server and image processing method
CN113628292B (en) * 2021-08-16 2023-07-25 上海云轴信息科技有限公司 Method and device for previewing pictures in target terminal

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540068A (en) * 2008-03-21 2009-09-23 上海研祥智能科技有限公司 Device for attendance checking through facial recognition and method thereof
WO2012161557A1 (en) * 2011-05-23 2012-11-29 Majid El Bouazzaoui Electronic device for the identification of persons, animals or objects; the exchange of information or of messages
CN103873809A (en) * 2012-12-14 2014-06-18 联想(北京)有限公司 Image processing method and electronic equipment
CN205405627U (en) * 2016-03-21 2016-07-27 哈尔滨象牙塔科技有限公司 Face identification system of calling roll of passive form colleges and universities
CN106375193A (en) * 2016-09-09 2017-02-01 四川长虹电器股份有限公司 Remote group photographing method
CN106657791A (en) * 2017-01-03 2017-05-10 广东欧珀移动通信有限公司 Method and device for generating synthetic image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540068A (en) * 2008-03-21 2009-09-23 上海研祥智能科技有限公司 Device for attendance checking through facial recognition and method thereof
WO2012161557A1 (en) * 2011-05-23 2012-11-29 Majid El Bouazzaoui Electronic device for the identification of persons, animals or objects; the exchange of information or of messages
CN103873809A (en) * 2012-12-14 2014-06-18 联想(北京)有限公司 Image processing method and electronic equipment
CN205405627U (en) * 2016-03-21 2016-07-27 哈尔滨象牙塔科技有限公司 Face identification system of calling roll of passive form colleges and universities
CN106375193A (en) * 2016-09-09 2017-02-01 四川长虹电器股份有限公司 Remote group photographing method
CN106657791A (en) * 2017-01-03 2017-05-10 广东欧珀移动通信有限公司 Method and device for generating synthetic image

Also Published As

Publication number Publication date
CN107230240A (en) 2017-10-03

Similar Documents

Publication Publication Date Title
CN106406710B (en) Screen recording method and mobile terminal
CN106060406B (en) Photographing method and mobile terminal
EP2701152B1 (en) Media object browsing in a collaborative window, mobile client editing, augmented reality rendering.
CN107678644B (en) Image processing method and mobile terminal
CN107509030B (en) focusing method and mobile terminal
US8537228B2 (en) Electronic device and method for controlling cameras
US10135925B2 (en) Non-transitory computer-readable medium, terminal, and method
CN107172347B (en) Photographing method and terminal
CN107360375B (en) Shooting method and mobile terminal
US9509733B2 (en) Program, communication apparatus and control method
CN105320695A (en) Picture processing method and device
CN107230240B (en) Shooting method and mobile terminal
CN107959789B (en) Image processing method and mobile terminal
EP3260998A1 (en) Method and device for setting profile picture
JP2018093361A (en) Communication terminal, communication system, video output method, and program
CN107480500B (en) Face verification method and mobile terminal
US20230334789A1 (en) Image Processing Method, Mobile Terminal, and Storage Medium
CN106815809B (en) Picture processing method and device
KR20150117820A (en) Method For Displaying Image and An Electronic Device Thereof
CN111553196A (en) Method, system, device and storage medium for detecting hidden camera
CN111125601A (en) File transmission method, device, terminal, server and storage medium
CN107592458B (en) Shooting method and mobile terminal
US20120098967A1 (en) 3d image monitoring system and method implemented by portable electronic device
CN106874787B (en) Image viewing method and mobile terminal
CN106713742B (en) Shooting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant