CN108848313B - Multi-person photographing method, terminal and storage medium - Google Patents


Info

Publication number: CN108848313B
Application number: CN201810912222.9A
Authority: CN (China)
Prior art keywords: target composition, shooting, composition, preview interface, ith
Legal status: Active (granted)
Other versions: CN108848313A (Chinese)
Inventor: 冉龙金
Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd; priority to CN201810912222.9A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces


Abstract

The invention discloses a multi-person photographing method, a terminal and a storage medium, relating to the field of image processing and aiming to solve the problem that a smart phone cannot achieve a good photographing effect. The method comprises the following steps: acquiring N target composition positions; displaying N target composition marks on a shooting preview interface, wherein the N target composition marks indicate the N target composition positions; and executing a shooting operation and outputting a target image when each of the N shooting objects displayed in the shooting preview interface is located at its corresponding target composition position, wherein N is an integer greater than 1. The invention is used for photographing.

Description

Multi-person photographing method, terminal and storage medium
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to a multi-person photographing method, a terminal and a storage medium.
Background
With the development of technology, people rarely use a standalone camera to take pictures; the smart phone is now widely used as the everyday camera device.
In the related art, when a plurality of users are photographed with a smart phone, they are often arranged near the center of the phone's shooting preview interface. In many cases, however, this approach does not achieve a good photographing effect.
Disclosure of Invention
The embodiment of the invention provides a multi-person photographing method, a terminal and a storage medium, and aims to solve the problem that a smart phone cannot achieve a good photographing effect.
In a first aspect, a method for photographing multiple persons is provided, including:
acquiring N target composition positions;
displaying N target composition marks on a shooting preview interface, wherein the N target composition marks indicate the N target composition positions;
executing a shooting operation and outputting a target image when each of the N shooting objects displayed in the shooting preview interface is located at its corresponding target composition position;
wherein N is an integer greater than 1.
In a second aspect, a terminal is provided, including:
an acquisition module for acquiring N target composition positions;
a display module for displaying N target composition marks on a shooting preview interface, wherein the N target composition marks indicate the N target composition positions;
a processing module for executing a shooting operation and outputting a target image when each of the N shooting objects displayed in the shooting preview interface is located at its corresponding target composition position;
wherein N is an integer greater than 1.
In a third aspect, a terminal is provided, comprising a processor and a memory, the memory having stored thereon a computer program which, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium, such as a non-transitory computer-readable storage medium, is provided, having stored thereon a computer program which, when executed, implements the steps of the method according to the first aspect.
In the embodiment of the present invention, the target composition marks indicating the target composition positions are displayed, so that the shooting operation can be performed and the target image output once all the photographic subjects displayed in the shooting preview interface are located at their corresponding target composition positions. Because the target composition marks serve as an explicit guide to the target composition positions, the subjective judgment of the photographer is reduced, a better shot image can be generated, and the problem that the smart phone cannot achieve a good shooting effect can be solved.
Drawings
Fig. 1 is a flowchart of a method for photographing a plurality of persons according to an embodiment of the present invention;
Fig. 2 is a flowchart of another method for photographing a plurality of persons according to an embodiment of the present invention;
Fig. 3A-3D are schematic diagrams illustrating an effect of a multi-person photographing method according to an embodiment of the invention;
Fig. 4 is a block diagram of a terminal according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The multi-person photographing method provided by the embodiment of the invention can be executed by a terminal, and in the embodiment of the invention, the terminal can be a device with photographing and/or shooting functions, such as a mobile phone, a tablet computer, a camera and the like.
The technical solutions provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for photographing multiple persons according to an embodiment of the present invention. Referring to fig. 1, a method for photographing a plurality of persons according to an embodiment of the present invention may include:
in step 110, N target composition positions are obtained.
Herein, N is an integer greater than 1.
In an embodiment of the present invention, the target composition position may be a position with respect to the photographing preview interface. The target composition position is a kind of reference position. The photographic subject at the target composition position can be regarded as being in a position ready state. In order to facilitate guiding the photographic subject to move to this reference position, a target composition mark may be displayed at a target composition position on the photographic preview interface.
In the embodiment of the present invention, the shooting preview interface may be an interface that shows the picture in the viewfinder of the camera, i.e., the interface that, during shooting, displays the image frames collected by the camera of the terminal.
In an embodiment of the present invention, the target composition position may be obtained by: acquiring a preview image acquired by a camera; acquiring the number N of shot objects in the preview image; performing feature recognition on the preview image to obtain the feature of each image element; and determining N target composition positions based on the number N of the shooting objects and the characteristics of each image element.
The image element may be content such as a tree, a road, a sky, etc., displayed in the image. The features of the image elements may be the colors, shades, styles, spatial positional relationships between them, and the like of these image elements.
The embodiment of the invention determines the target composition positions based on the number of photographic subjects and the features of the image elements, fully considering the specific shooting environment. This ensures that the obtained target composition positions are more accurate, yields a better composition, and better meets the current shooting requirement.
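As an illustration only (not the patented algorithm), the determination of the N positions can be sketched as follows; the evenly spaced layout and the lower-third heuristic are assumptions standing in for the feature-based policy lookup described above:

```python
def get_target_positions(num_subjects, image_elements, frame_width, frame_height):
    """Sketch: spread N target positions evenly across the lower third of
    the frame. A real implementation would weigh the recognised
    image-element features (colour, depth, layout); here `image_elements`
    is accepted but unused, as a minimal stand-in."""
    y = int(frame_height * 2 / 3)               # subjects usually occupy the lower third
    spacing = frame_width // (num_subjects + 1)  # leave equal gaps at both edges
    return [(spacing * (i + 1), y) for i in range(num_subjects)]
```

For a 1200x900 preview and three subjects, this yields three positions on the same horizontal line, spaced 300 pixels apart.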
In step 120, N target composition marks are displayed on a shooting preview interface, wherein the N target composition marks indicate the N target composition positions.
In the embodiment of the invention, once the target composition position is determined, the target composition mark can be displayed at the target composition position to play a guiding role.
The displaying of the N target composition identifiers on the shooting preview interface in this step may include: displaying an ith target composition mark on the shooting preview interface, wherein the ith target composition mark indicates an ith target composition position; determining an (i+1)th target composition mark; and, when the ith photographic subject displayed in the shooting preview interface is located at the ith target composition position, displaying the (i+1)th target composition mark on the shooting preview interface, wherein the (i+1)th target composition mark indicates the (i+1)th target composition position; here i is a positive integer not greater than N-1.
In the embodiment of the invention, the current (ith) target composition mark is displayed, and only when the current (ith) photographic subject is located at its corresponding target composition position is the next ((i+1)th) target composition mark displayed. Displaying the target composition marks one by one in this way guides each photographic subject into position individually, avoids the interference that displaying a plurality of target composition marks at once would cause, and ensures that each photographic subject can quickly reach its corresponding target composition position.
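The one-by-one display logic can be sketched as a small state machine; `SequentialGuide` is a hypothetical name, and the in-position signal is assumed to come from a separate subject-tracking layer:

```python
class SequentialGuide:
    """Shows composition markers one at a time (illustrative sketch)."""

    def __init__(self, target_positions):
        self.targets = target_positions
        self.current = 0                     # index of the marker on screen

    def advance(self, in_position):
        """Return the marker to display; move to the (i+1)th marker only
        after the ith subject has been reported in place."""
        if in_position and self.current < len(self.targets) - 1:
            self.current += 1
        return self.targets[self.current]
```

Each preview frame, the UI would call `advance()` with the tracker's verdict and draw the returned marker.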
In determining the (i+1)th target composition identifier, the composition-identifier display sequence input by the user may first be acquired, and the (i+1)th target composition identifier determined based on that display sequence. Because the display sequence is determined from the user's input, the display of the target composition identifiers is more flexible and better meets the user's needs.
In the embodiment of the present invention, the composition-identifier display sequence input by the user may be obtained, for example, by first displaying all the target composition identifiers and recording the order in which the user operates on them (e.g., the user clicks the target composition identifiers one by one in the desired order); the display sequence is then the order of those operations. Alternatively, the display sequence may be determined in response to a designation operation by the user after the shooting preview interface is displayed: if a left-to-right sliding operation is detected, the composition identifiers are displayed from left to right; if a right-to-left sliding operation is detected, they are displayed from right to left.
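The slide-direction rule can be sketched in a few lines; `swipe_dx` is an assumed input from the touch layer (positive for a left-to-right slide):

```python
def marker_order(positions, swipe_dx):
    """Order (x, y) markers left-to-right for a rightward swipe,
    right-to-left otherwise (illustrative sketch)."""
    return sorted(positions, key=lambda p: p[0], reverse=swipe_dx < 0)
```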
Optionally, after displaying the ith target composition identifier on the shooting preview interface, displaying prompt information at the ith target composition position when the ith shooting object displayed on the shooting preview interface is located at the ith target composition position, where the prompt information is used to indicate that the composition at the ith target composition position is completed. Wherein the prompt message may include at least one of a text prompt, a voice prompt, and an animation prompt.
With the in-position prompt of the embodiment of the invention, the photographer knows in time that the current photographic subject is in place, the next photographic subject (if any) can then move into position, and shooting can start once all the photographic subjects are in position.
Optionally, before displaying the ith target composition identifier on the shooting preview interface, all target composition identifiers may be displayed on the shooting preview interface, so that a photographer can know the position of each target composition identifier in advance. All target composition identifications may disappear after being displayed for a period of time (e.g., 5 seconds), and only the target composition identifications are subsequently displayed one by one. Of course, if the user finds that the position of one or more target composition identifiers needs to be adjusted while all the target composition identifiers are displayed, a first input operation (e.g., quick click on two other positions on the shooting preview interface) may be performed. The terminal updates a display position of at least one target composition identifier in response to a first input of a user if the first input is received within a preset time period. Therefore, the display position of the target composition mark can be guaranteed to be adjustable, the display position is more flexible, and the user requirements can be better met.
In step 130, when each of the N shooting objects displayed in the shooting preview interface is located at its corresponding target composition position, a shooting operation is executed and a target image is output.
In the embodiment of the present invention, the photographic subject may be a subject photographed by the image pickup device, and the subject may be, for example, a user, an animal, or the like. The multi-photographic subject may refer to a plurality of photographic subjects (i.e., two or more photographic subjects). For example, if 3 persons need to be group-filmed, each person is a photographic subject, and may be an a photographic subject, a B photographic subject, and a C photographic subject in this order.
In the embodiment of the present invention, the image of the photographic subject formed in the photographic preview interface is a subject image of the photographic subject. The subject image of the photographic subject represents an image of the photographic subject itself (including only the photographic subject), does not include anything around the photographic subject, and can be captured from an image formed by photographing the photographic subject with the image pickup device.
The target composition position (which may be regarded as a kind of virtual reference position) may or may not initially coincide with a position (i.e., a real position) of an image formed by a photographic subject in the photographic preview interface. In general, the target composition position does not initially coincide with the position of the image formed by the photographic subject in the photographic preview interface. The shooting object needs to move, or the camera device needs to move, so that the target composition position can be ensured to be overlapped with the position of the image formed by the shooting object in the shooting preview interface. Once the image of the photographic subject in the photographic preview interface is located at the target composition position corresponding thereto, it can be considered that this photographic subject is already in place, i.e., in a better photographic position. At this time, the photographing operation may be performed to output the target image.
In the embodiment of the invention, if a photographic subject is photographed, after the target composition position is determined, whether an image formed by the photographic subject in the photographing preview interface is displayed at the target composition position for the photographic subject can be detected. If a plurality of subjects are photographed, it is possible to detect whether an image formed by the subject in the photographing preview interface is displayed at a target composition position for the subject for each of the plurality of subjects.
In the embodiment of the invention, saying that the image formed by the photographic subject in the shooting preview interface is located at the target composition position means that this image coincides with the target composition position. Here, coincidence is a relative concept, not an absolute one: the position of the photographic subject in the shooting preview interface coinciding with the target composition position may mean that they mostly overlap, for example by more than a preset percentage (generally more than 50%, such as 70% or 80%). That is, the present invention does not necessarily require 100% coincidence between the position of the photographic subject in the shooting preview interface and the target composition position.
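The partial-coincidence test can be sketched as a coverage check over bounding boxes; the `(x1, y1, x2, y2)` box format and the 0.7 threshold are illustrative assumptions consistent with the 70%-80% examples above:

```python
def is_in_position(subject_box, target_box, threshold=0.7):
    """True when the subject's bounding box covers at least `threshold`
    of the target composition region; exact coincidence is not required."""
    ix1 = max(subject_box[0], target_box[0])
    iy1 = max(subject_box[1], target_box[1])
    ix2 = min(subject_box[2], target_box[2])
    iy2 = min(subject_box[3], target_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # intersection area
    target_area = (target_box[2] - target_box[0]) * (target_box[3] - target_box[1])
    return inter / target_area >= threshold
```

A subject covering half the target region is judged not in place under the 0.7 threshold, while 80% coverage passes.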
In the embodiment of the invention, because the target composition marks indicating the target composition positions are displayed, the shooting operation can be executed and the target image output automatically once all the photographic subjects displayed in the shooting preview interface are located at their corresponding target composition positions, without the user inputting a shooting instruction.
It is noted that, when acquiring a target composition position, the embodiments of the present invention may acquire the target composition position based on a composition policy.
In an embodiment of the present invention, a method for obtaining target composition positions includes: acquiring the number of the at least one photographic subject; acquiring the features of the image elements in the preview image displayed in the shooting preview interface; selecting a composition strategy from a composition strategy library according to the features of the image elements and the number of the at least one photographic subject; and determining the target composition positions based on the selected composition strategy. A composition strategy may specify target composition positions for given image elements and a given number of photographed persons. The composition strategy may thus be associated with the number of persons photographed (e.g., 3 persons, 5 persons, etc.) and with the image elements in the shooting preview interface (e.g., trees, roads, sky, etc.); once the number of persons and the image elements are determined, the corresponding composition strategy can be determined accordingly. The composition strategy library may contain a plurality of composition strategies, such as a strategy for 3 persons, a strategy for 4 persons, and so on, as well as strategies for different image elements, for example a strategy for trees, a strategy for roads, a strategy for the sky, and the like. After the number of persons to be photographed and the image elements in the shooting preview interface are determined, the composition strategy can be selected based on the intersection of the two, for example, the strategy for trees among the three-person strategies.
As can be seen from the above, selecting a composition strategy from the composition strategy library according to the features of the image elements and the number of the at least one photographic subject may specifically be: selecting, from the composition strategy library, a composition strategy that satisfies both the number of the at least one photographic subject and the image elements. When selecting a strategy for the image elements, the strategy covering the most image elements may be preferred: for example, if the preview interface contains both trees and a road, a strategy covering both trees and a road is preferred over one covering only trees or only a road. In the selection process, the number of photographed persons may be taken as the priority criterion, i.e., strategies matching the number of persons are selected first and, among them, the strategy matching the image elements is chosen. Of course, the image elements may instead be taken as the priority criterion, i.e., strategies matching the image elements are selected first and, among them, the strategy matching the number of persons is chosen.
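The two-stage selection (head count first, then best element match) can be sketched as follows; the dict-based policy library is a hypothetical representation, not a format the patent specifies:

```python
def select_policy(policies, num_people, scene_elements):
    """Pick a policy matching the head count first, then the one whose
    element set overlaps the scene most (ties broken arbitrarily).
    Returns None when no policy fits the head count."""
    candidates = [p for p in policies if p["people"] == num_people]
    if not candidates:
        return None
    return max(candidates,
               key=lambda p: len(set(p["elements"]) & set(scene_elements)))
```

With a library holding a tree-only and a tree-and-road strategy for three people, a scene containing both elements selects the latter, mirroring the "most image elements" preference described above.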
In the embodiment of the present invention, the feature of the image element may also cover a style formed by matching the image elements with each other. Styles may include, for example, classical styles, modern styles, and the like.
In an embodiment of the present invention, the composition policy library may be designed in advance (e.g., by the photographer).
The method for determining the target composition position based on the composition strategy ensures that the selected target composition position can better reflect the actual situation during shooting because the composition strategy fully considers the image elements and the number of the shooting people in the shooting preview interface, thereby ensuring that the selection of the target composition position has certain objectivity, reducing the human subjective factors of the shooting people and generating better shot images.
In the embodiment of the present invention, the composition strategy may also be selected according to only one of the two criteria: the features of the image elements, or the number of the at least one photographic subject.
It should be understood that the above is only an example, and a specific manner of determining the target composition position based on the composition strategy in the embodiment of the present invention is not limited to the above-listed manner, and only image elements and/or the number of persons in the preview interface need to be considered in determining the target composition position.
In the embodiment of the present invention, acquiring the number of the at least one photographic subject may specifically be: receiving the number of photographic subjects input by the user. Alternatively, the faces (for example, human faces) of all subjects in the shooting preview interface may be recognized by the image pickup device, and the number of photographic subjects determined automatically from the recognition result.
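Resolving N from either source can be sketched as below; `detected_faces` is an assumed list of face bounding boxes from whatever detector the terminal uses:

```python
def count_subjects(user_input, detected_faces):
    """Resolve N: an explicit valid user entry wins; otherwise fall back
    to the number of faces the camera pipeline detected."""
    if user_input is not None:
        n = int(user_input)
        if n > 1:                       # the method requires N > 1
            return n
    return len(detected_faces)
```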
In this embodiment of the present invention, obtaining the image elements in the shooting preview interface may specifically be: acquiring the captured preview image, analyzing it (i.e., performing feature recognition), and determining the image elements from it. Of course, the user may also actively input the image elements in the shooting preview interface (for example, trees, sky, etc.).
In one embodiment of the present invention, the composition position recommendation function may be turned on first (e.g., in case of a user operation), and once the composition position recommendation function is turned on, at least one target composition position may be displayed on the photographing preview interface. That is, in the embodiment of the present invention, the step 110 may be executed when the composition position recommending function is turned on, and when the composition position recommending function is not turned on, a conventional photographing manner is adopted, and at least one target composition position is not displayed. Of course, in the embodiment of the present invention, the composition position recommendation function may be turned on by default.
In an embodiment of the present invention, the number of target composition positions may correspond to the number of photographic subjects. This correspondence may mean that the two numbers are equal, that the number of target composition positions is k times the number of photographic subjects, that the number of photographic subjects is k times the number of target composition positions, and so on, where k may be a positive integer.
In an embodiment of the present invention, a target composition position for an image element and a number of persons can be specified in the composition policy. For example, if a road is included in the image element, a plurality of positions on the road may be determined as the target composition position in the composition policy, where the number of the plurality of positions is equal to the number of persons who take the picture. For another example, if the image element contains sky, multiple positions below the sky may be determined as target composition positions in the composition strategy, where the number of the multiple positions is equal to the number of persons taking the picture, and so on. Here, by way of example only, in the embodiment of the present invention, once the composition policy is obtained, a plurality of target composition positions with the same number as the number of persons who shot may be determined based on the association between the target composition positions specified in the composition policy and the image elements and the number of persons who shot, and target composition identifiers corresponding to the target composition positions may be displayed in the shooting preview interface.
In the embodiment of the present invention, the target composition identifiers may be numbered and displayed with their numbers. Displaying the numbers shows more intuitively both the total count of target composition identifiers and which one the current target composition identifier is, helping the user grasp the relevant information.
In the embodiment of the present invention, after the target composition positions are determined based on the composition strategy, the target composition identifiers corresponding to all the target composition positions may be displayed on the shooting preview interface for a period of time, for example 10 seconds. After that period expires, only one of them (for example, the first target composition identifier) is displayed instead of all of them; once the photographic subject corresponding to that target composition position is in place, an in-position prompt is displayed, and then the next target composition identifier (for example, the second) is displayed in the shooting preview interface. Displaying the target composition identifiers one by one in this way attracts the user's attention more easily and helps the photographic subjects get into position quickly.
Of course, in the embodiment of the present invention, after the target composition position is determined based on the composition policy, the target composition identifiers corresponding to all the target composition positions may be always displayed on the shooting preview interface. Wherein, the target composition mark can be displayed in a flashing manner, for example, to remind the user of which target composition position is currently aimed. The user can know the positions of all the target composition more clearly and clearly in the whole process by displaying all the target composition marks on the shooting preview interface all the time.
According to the embodiment of the invention, by introducing the composition strategy, the composition strategy is used for determining the target composition position in the shooting preview interface of the camera device, and the target composition mark in the shooting preview interface is displayed based on the composition strategy, so that when the images formed by all the shooting objects in the shooting preview interface are positioned at the corresponding target composition position, the shooting operation can be executed. In the process, as the target composition position of the shooting object is determined based on the composition strategy, the human subjective factors of the shooting person are reduced, and a better shooting image can be generated.
The invention will be further explained below with reference to specific embodiments and the drawings.
Fig. 2 shows a method of photographing a plurality of persons. As shown in fig. 2, the method for photographing a plurality of persons according to the embodiment of the present invention may include:
In step 210, the composition position recommendation function is turned on.
The terminal device may provide a switch entry for a composition position recommendation function. After the photographing preview interface is displayed by the photographing apparatus (e.g., a camera), the composition position recommendation function may be turned on according to a user selection, or may be turned on by default. After the composition position recommending function is started, a multi-person composition guide interface can be displayed, and an input box can be displayed in the interface, for example, the user can input the number of people in group pictures and the like.
And step 220, acquiring a composition strategy, wherein the composition strategy is used for determining at least one target composition position in a shooting preview interface of the camera device.
The step may specifically be: acquiring the number of the at least one photographic object; acquiring image elements in the shooting preview interface; and selecting a composition strategy from a composition strategy library according to the characteristics of the image elements and the number of the at least one shooting object. Specific ways of selecting the composition strategy can be found in the above description.
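A minimal sketch of this selection step, assuming a hypothetical strategy library keyed on a dominant scene feature and a group-size bucket (the patent does not specify the library's contents, and these names are illustrative):

```python
# Illustrative only: keys combine a detected scene feature with a size bucket.
STRATEGY_LIBRARY = {
    ("road", "small"): "single row side by side along the road",
    ("road", "large"): "two staggered rows along the road",
    ("tree", "small"): "arc in front of the tree",
}


def select_strategy(scene_feature, n_subjects, small_max=6):
    """Step 220 sketch: pick a composition strategy from the library using
    the image-element feature and the number of photographic subjects."""
    bucket = "small" if n_subjects <= small_max else "large"
    # Fall back to a generic strategy when the scene is not in the library.
    return STRATEGY_LIBRARY.get((scene_feature, bucket), "single centered row")
```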
Step 230, determining at least one target composition position based on the composition strategy, and displaying a target composition mark indicating the target composition position in the shooting preview interface. Wherein the number of target composition identifications corresponds to the number of at least one photographic subject.
Step 240, detecting whether an image formed by a target photographic subject in the photographic preview interface is displayed at a target composition position corresponding to the target photographic subject in the at least one target composition position.
Step 250, if the image formed by the target photographic subject in the shooting preview interface is displayed at the target composition position, displaying a positioning prompt at the target composition position.
The positioning prompt may include at least one of a text prompt, a voice prompt, and an animated prompt. For example, the positioning prompt may be an animation, which adds interest.
If the target photographic subject is not the last of the at least one photographic subject, the target composition position of the next photographic subject may be displayed, and the method returns to step 240 until the last photographic subject is reached. If the target photographic subject is the last photographic subject, step 270 may be performed.
Step 260, if the image formed by the target photographic subject in the shooting preview interface is not displayed at the target composition position, displaying a guide identifier pointing to the target composition position on the shooting preview interface.
Step 270, when the image formed by each photographic subject in the shooting preview interface is located at its corresponding target composition position, executing the shooting operation and outputting the target image.
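Steps 230 to 270 can be summarized as a single loop. The sketch below is illustrative only; `is_in_position`, `show`, and `shoot` are assumed callback names standing in for the detection and user-interface layers:

```python
def guide_group_shot(target_positions, is_in_position, show, shoot):
    """Sketch of steps 230-270: reveal each target composition identifier in
    turn, guide the subject until in position, confirm, then shoot."""
    for i, pos in enumerate(target_positions, start=1):
        show(f"identifier {i} at {pos}")       # step 230: show the identifier
        while not is_in_position(i, pos):      # step 240: detect the subject
            show(f"guide arrow toward {pos}")  # step 260: guide identifier
        show(f"subject {i} is in position")    # step 250: positioning prompt
    return shoot()                             # step 270: take the photo
```

In a real terminal the `is_in_position` check would come from face or body detection on the preview frames, and `show` would draw on the shooting preview interface.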
In this embodiment, the target composition positions are determined and the target composition identifiers are displayed based on the composition strategy. Because the composition strategy fully considers the image elements in the shooting preview interface and the number of people being photographed, the selected target composition positions better reflect the actual shooting conditions. This gives the selection of the target composition positions a degree of objectivity, reduces the photographer's subjective human factors, and produces a better photographed image.
Referring to fig. 3A to 3D, take a 5-person group photo as an example: the user needs to photograph 5 subjects, denoted A, B, C, D and E. After the composition strategy is acquired, the determined target composition positions are standing positions side by side along the road, and the 5 target composition identifiers corresponding to those positions may be displayed on the shooting preview interface, as shown in fig. 3A, where reference numeral 302 represents a tree in the shooting preview interface, reference numeral 304 represents a road, and reference numeral 306 represents a target composition identifier. Initially, all 5 identifiers may be displayed (for example, five virtual circles displayed side by side along the road area in fig. 3A), and after a period of time only the current identifier is displayed. For example, the 5 virtual circles may flash simultaneously at 1-second intervals to prompt the user, and disappear after flashing 5 times in total. The virtual circles may then be displayed one at a time, in order from left to right in the shooting preview interface, with the current sequence number shown inside each circle. Specifically, fig. 3B shows the target composition position of the 1st photographic subject (i.e., subject A).
When the image formed by the first photographic subject in the shooting preview interface is at the first target composition position, a positioning prompt may be displayed (for example, virtual circle 1 plays an up-and-down jumping animation for 3 seconds, after which the number 1 and the animation disappear, indicating that the first subject is in position), and the second target composition position is then indicated with a second target composition identifier 306, as shown in fig. 3C. When the image formed by the second photographic subject is at the second target composition position, a positioning prompt can likewise be displayed, and the third target composition position is indicated with its identifier. This repeats until the last (5th) photographic subject. As shown in fig. 3D, when the image formed by the last photographic subject is located at the fifth target composition position, the text "composition is completed, please start photographing" may be displayed on the shooting preview interface, and a photographed image is generated based on the images formed by the 5 subjects (A, B, C, D and E) in the shooting preview interface. A schematic diagram of the resulting photographed image is shown in fig. 3D.
As for the display order of the 5 photographic subjects in the above process, a left-to-right or right-to-left order may be used by default, or the subjects may first count off and the display order may then be determined according to the count-off order.
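The three ordering options just described could be expressed as follows; the function name and `mode` values are illustrative, not from the patent:

```python
def display_order(n, mode="ltr", countoff=None):
    """Returns 1-based identifier numbers in the order they are revealed.

    mode: "ltr" (default left-to-right), "rtl" (right-to-left), or
    "countoff", in which case a user-supplied count-off sequence of
    identifier numbers decides the order.
    """
    if mode == "rtl":
        return list(range(n, 0, -1))
    if mode == "countoff" and countoff:
        return list(countoff)
    return list(range(1, n + 1))
```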
It should be understood that the exemplary illustrations given with respect to fig. 3A to 3D are only for the purpose of better understanding the technical solutions of the present invention by those skilled in the art, and are not intended to be limiting.
Fig. 4 is a block diagram of a terminal according to an embodiment of the present invention. Referring to fig. 4, a terminal 400 provided in an embodiment of the present invention may include: an acquisition module 401, a display module 402 and a processing module 403. Wherein:
an obtaining module 401, configured to obtain N target composition positions;
a display module 402, configured to display N target composition identifiers on a shooting preview interface, where the N composition identifiers indicate N target composition positions;
a processing module 403, configured to perform a shooting operation and output a target image when each of the N subjects displayed in the shooting preview interface is located at a corresponding target composition position;
wherein N is an integer greater than 1.
In this embodiment of the present invention, the target composition identifiers indicating the target composition positions are displayed, so that the shooting operation can be performed and the target image output once all the subjects displayed in the shooting preview interface are located at their corresponding target composition positions. Because the target composition identifiers serve as an explicit guide to the target composition positions, the photographer's subjective human factors are reduced, a better photographed image can be generated, and the problem that a smartphone cannot otherwise achieve a good group-photo effect is alleviated.
Optionally, in an embodiment of the present invention, the obtaining module 401 may be specifically configured to:
acquiring a preview image acquired by a camera;
acquiring the number N of shot objects in the preview image;
performing feature recognition on the preview image to obtain the feature of each image element;
and determining N target composition positions based on the number N of the shooting objects and the characteristics of each image element.
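One simple way the obtaining module might turn a subject count and a detected image element into concrete positions is to space N standing points evenly along a ground line, such as the road in fig. 3A. This placement rule is an assumption for illustration; the patent leaves the exact rule to the composition strategy:

```python
def target_positions(n, frame_width, ground_y):
    """Evenly space n standing positions along a detected ground line
    (e.g. the road of fig. 3A), leaving half-step margins at both edges.
    Returns (x, y) pixel coordinates in the preview frame."""
    step = frame_width / n
    return [(round(step * (i + 0.5)), ground_y) for i in range(n)]
```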
Optionally, in an embodiment of the present invention, the display module 402 may be specifically configured to:
displaying an ith target composition mark on a shooting preview interface, wherein the ith composition mark indicates an ith target composition position;
determining an i +1 th target composition mark;
under the condition that an ith shooting object displayed in a shooting preview interface is located at the ith target composition position, displaying an (i + 1) th target composition mark on the shooting preview interface, wherein the (i + 1) th composition mark indicates the (i + 1) th target composition position;
wherein i is an integer greater than or equal to 1, and i is less than N.
Optionally, in an embodiment of the present invention, when determining the (i + 1) th target composition identifier, the display module 402 may be specifically configured to:
acquiring a display sequence of composition marks input by a user;
and determining the (i + 1) th target composition mark based on the display sequence of the composition marks.
Optionally, in an embodiment of the present invention, the display module 402 may be further configured to:
after displaying the ith target composition mark on a shooting preview interface, displaying prompt information at the ith target composition position under the condition that the ith shooting object displayed in the shooting preview interface is positioned at the ith target composition position, wherein the prompt information is used for indicating that the composition at the ith target composition position is finished.
Optionally, in an embodiment of the present invention, the display module 402 may be further configured to: before displaying the ith target composition mark, displaying all target composition marks on a shooting preview interface;
the processing module 403 may also be configured to: and in the case that a first input of the user is received within a preset time period, updating the display position of at least one target composition identifier in response to the first input.
It is to be understood that the terminal here may be the terminal described below.
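The position-updating behavior of the processing module described above (a first input received within a preset time period overrides a recommended position) can be sketched as a pure function; the name `apply_first_input` and its parameters are hypothetical:

```python
def apply_first_input(positions, first_input, elapsed_s, preset_window_s=10):
    """If the user's first input (index of a target composition identifier
    and its new point) arrives within the preset time period, the
    recommended position is overridden; otherwise the recommendation is
    kept unchanged. Returns a new list, leaving the input untouched."""
    if first_input is None or elapsed_s > preset_window_s:
        return list(positions)
    k, new_pos = first_input
    updated = list(positions)
    updated[k] = new_pos
    return updated
```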
Fig. 5 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention.
The terminal 500 includes but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the terminal configuration shown in fig. 5 is not intended to be limiting, and that the terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, and the like.
The processor 510 is configured to acquire a pre-shooting position of a photographic subject, where the pre-shooting position is a position determined for the photographic subject on a shooting preview interface of the terminal; and, in response to a specific operation of the user, display a target image corresponding to the photographic subject at the pre-shooting position in the shooting preview interface.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 501 may be used to receive and send signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 502, such as helping the user send and receive e-mails, browse web pages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the terminal 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive audio or video signals. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 501.
The terminal 500 can also include at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 5, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the terminal, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the terminal, and is not limited herein.
The interface unit 508 is an interface for connecting an external device to the terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the terminal 500 or may be used to transmit data between the terminal 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the terminal (such as audio data or a phonebook). Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the terminal. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 through a power management system, so that functions of managing charging, discharging, and power consumption are performed through the power management system.
In addition, the terminal 500 includes some functional modules that are not shown, and are not described in detail herein.
An embodiment of the present invention further provides a terminal, including a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements the steps of any of the multi-person photographing methods described above and achieves the same technical effects; to avoid repetition, the details are not repeated here.
According to this embodiment of the present invention, a composition strategy is introduced to determine at least one target composition position in the shooting preview interface of the terminal, and the at least one target composition position in the shooting preview interface is displayed based on that strategy, so that the photographed image can be generated once the images formed by all the photographic subjects in the shooting preview interface are located at their corresponding target composition positions. Because the target composition positions of the photographic subjects are determined by the composition strategy, subjective human factors are reduced and a better photographed image can be produced.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the multi-person photographing methods described above and achieves the same technical effects. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (12)

1. A method for photographing a plurality of persons, comprising:
acquiring N target composition positions;
displaying N target composition marks on a shooting preview interface, wherein the N composition marks indicate N target composition positions;
executing shooting operation and outputting a target image under the condition that each shooting object in the N shooting objects displayed in the shooting preview interface is located at the corresponding target composition position;
wherein N is an integer greater than 1;
the displaying N target composition marks on the shooting preview interface comprises the following steps:
displaying an ith target composition mark on a shooting preview interface, wherein the ith composition mark indicates an ith target composition position;
determining an i +1 th target composition mark;
under the condition that an ith shooting object displayed in a shooting preview interface is located at the ith target composition position, displaying an (i + 1) th target composition mark on the shooting preview interface, wherein the (i + 1) th composition mark indicates the (i + 1) th target composition position;
wherein i is an integer greater than or equal to 1, and i is less than N.
2. The method of claim 1, wherein said obtaining N target composition positions comprises:
acquiring a preview image acquired by a camera;
acquiring the number N of shot objects in the preview image;
performing feature recognition on the preview image to obtain the feature of each image element;
and determining N target composition positions based on the number N of the shooting objects and the characteristics of each image element.
3. The method according to claim 1, wherein the determining the i +1 th target composition identifier comprises:
acquiring a display sequence of composition marks input by a user;
and determining the (i + 1) th target composition mark based on the display sequence of the composition marks.
4. The method according to claim 1, wherein after displaying the ith target composition identifier in the shooting preview interface, the method further comprises:
and under the condition that the ith shooting object displayed in the shooting preview interface is positioned at the ith target composition position, displaying prompt information at the ith target composition position, wherein the prompt information is used for indicating that the composition at the ith target composition position is finished.
5. The method according to claim 1, wherein before displaying the ith target composition identifier in the shooting preview interface, the method further comprises:
displaying all target composition marks on a shooting preview interface;
and in the case that a first input of the user is received within a preset time period, updating the display position of at least one target composition identifier in response to the first input.
6. A terminal, comprising:
an acquisition module, configured to acquire N target composition positions;
a display module, configured to display N target composition identifiers in a shooting preview interface, wherein the N composition identifiers indicate the N target composition positions; and
a processing module, configured to perform a shooting operation and output a target image in a case that each of N shooting objects displayed in the shooting preview interface is located at its corresponding target composition position;
wherein N is an integer greater than 1;
the display module is specifically configured to:
display an ith target composition identifier in the shooting preview interface, wherein the ith composition identifier indicates an ith target composition position;
determine an (i+1)th target composition identifier; and
in a case that an ith shooting object displayed in the shooting preview interface is located at the ith target composition position, display the (i+1)th target composition identifier in the shooting preview interface, wherein the (i+1)th composition identifier indicates an (i+1)th target composition position;
wherein i is an integer greater than 1, and i is not greater than N.
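The display module's sequential behavior in claim 6 amounts to a guide loop: show one identifier, wait until that subject reaches its position, then show the next. The following is a minimal sketch of that flow, not the patent's implementation; `show_identifier` and `subject_at_position` are assumed callbacks supplied by the terminal:

```python
# Illustrative sketch of claim 6's sequential guidance:
#   1. display the ith identifier at its target position,
#   2. poll until the ith subject is located at that position,
#   3. advance to the (i+1)th identifier.

def guide_composition(identifiers, positions, subject_at_position, show_identifier):
    """Guide N subjects into place one at a time.

    identifiers         -- target composition identifiers, in display order
    positions           -- matching target composition positions
    subject_at_position -- callback (i, pos) -> bool, True when subject i
                           is located at pos
    show_identifier     -- callback (identifier, pos) displaying the
                           identifier in the shooting preview interface
    """
    for i, (ident, pos) in enumerate(zip(identifiers, positions)):
        show_identifier(ident, pos)
        while not subject_at_position(i, pos):
            pass  # in practice this would re-check on each preview frame
    # all N subjects placed; the shooting operation may now be performed
```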
7. The terminal according to claim 6, wherein the acquisition module is specifically configured to:
acquire a preview image captured by a camera;
acquire a number N of shooting objects in the preview image;
perform feature recognition on the preview image to obtain a feature of each image element; and
determine the N target composition positions based on the number N of shooting objects and the feature of each image element.
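Claim 7 leaves the placement rule open; as one hedged example, the N positions could be spread evenly along a rule-of-thirds line derived from the frame size. A real implementation would also weigh the recognized features of each image element; everything below is illustrative:

```python
# Minimal sketch: derive N target composition positions from the number
# of detected shooting objects and the preview frame size, spacing them
# evenly on the lower rule-of-thirds line.

def target_positions(n_subjects, frame_w, frame_h):
    """Return n_subjects (x, y) positions evenly spaced across the frame."""
    y = frame_h * 2 // 3                  # lower third line of the frame
    step = frame_w // (n_subjects + 1)    # even horizontal spacing
    return [((k + 1) * step, y) for k in range(n_subjects)]
```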
8. The terminal according to claim 6, wherein, in determining the (i+1)th target composition identifier, the display module is specifically configured to:
acquire a display order of composition identifiers input by a user; and
determine the (i+1)th target composition identifier based on the display order of the composition identifiers.
9. The terminal according to claim 6, wherein the display module is further configured to:
after displaying the ith target composition identifier in the shooting preview interface, in a case that the ith shooting object displayed in the shooting preview interface is located at the ith target composition position, display prompt information at the ith target composition position, wherein the prompt information indicates that composition at the ith target composition position is finished.
10. The terminal according to claim 6, wherein the display module is further configured to: before displaying the ith target composition identifier, display all target composition identifiers in the shooting preview interface; and
the processing module is further configured to: in a case that a first input of a user is received within a preset time period, update a display position of at least one target composition identifier in response to the first input.
11. A terminal, comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the multi-person photographing method according to any one of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN201810912222.9A 2018-08-10 2018-08-10 Multi-person photographing method, terminal and storage medium Active CN108848313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810912222.9A CN108848313B (en) 2018-08-10 2018-08-10 Multi-person photographing method, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN108848313A CN108848313A (en) 2018-11-20
CN108848313B (en) 2020-11-06

Family

ID=64195567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810912222.9A Active CN108848313B (en) 2018-08-10 2018-08-10 Multi-person photographing method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN108848313B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109905601B (en) * 2019-03-27 2021-01-15 联想(北京)有限公司 Photographing method and electronic equipment
CN111010511B (en) * 2019-12-12 2021-08-10 维沃移动通信有限公司 Panoramic body-separating image shooting method and electronic equipment
CN113055581B (en) * 2019-12-26 2022-07-22 北京百度网讯科技有限公司 Image shooting method and device and electronic equipment
CN111182208B (en) * 2019-12-31 2021-09-10 Oppo广东移动通信有限公司 Photographing method and device, storage medium and electronic equipment
WO2022178724A1 (en) * 2021-02-24 2022-09-01 深圳市大疆创新科技有限公司 Image photographing method, terminal device, photographing apparatus, and storage medium
CN113301251B (en) * 2021-05-20 2023-10-20 努比亚技术有限公司 Auxiliary shooting method, mobile terminal and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574006A (en) * 2014-10-10 2016-05-11 阿里巴巴集团控股有限公司 Method and device for establishing photographing template database and providing photographing recommendation information
CN107360375A (en) * 2017-08-29 2017-11-17 维沃移动通信有限公司 Image pickup method and mobile terminal
CN107888822A (en) * 2017-10-27 2018-04-06 珠海市魅族科技有限公司 Image pickup method, device, terminal and readable storage medium
CN108289174A (en) * 2018-01-25 2018-07-17 努比亚技术有限公司 Image pickup method, mobile terminal and computer readable storage medium
CN108377339A (en) * 2018-05-07 2018-08-07 维沃移动通信有限公司 Photographing method and photographing apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102618495B1 (en) * 2015-01-18 2023-12-29 삼성전자주식회사 Apparatus and method for processing image


Also Published As

Publication number Publication date
CN108848313A (en) 2018-11-20

Similar Documents

Publication Publication Date Title
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN109639970B (en) Shooting method and terminal equipment
CN109361865B (en) Shooting method and terminal
CN111355889B (en) Shooting method, shooting device, electronic equipment and storage medium
US11551726B2 (en) 2023-01-10 Video synthesis method, terminal and computer storage medium
CN110365907B (en) Photographing method and device and electronic equipment
CN106791893A (en) Net cast method and device
CN108777766B (en) Multi-person photographing method, terminal and storage medium
CN109068055B (en) Composition method, terminal and storage medium
CN108712603B (en) Image processing method and mobile terminal
CN108174103B (en) Shooting prompting method and mobile terminal
CN108182271B (en) Photographing method, terminal and computer readable storage medium
CN108924412B (en) Shooting method and terminal equipment
CN108513067B (en) Shooting control method and mobile terminal
CN110933468A (en) Playing method, playing device, electronic equipment and medium
US20180107869A1 (en) Method and apparatus for identifying gesture
CN109618218B (en) Video processing method and mobile terminal
CN108984143B (en) Display control method and terminal equipment
CN112052897A (en) Multimedia data shooting method, device, terminal, server and storage medium
CN108174110B (en) Photographing method and flexible screen terminal
CN111752450A (en) Display method and device and electronic equipment
CN110650367A (en) Video processing method, electronic device, and medium
CN108549660B (en) Information pushing method and device
CN111083374B (en) Filter adding method and electronic equipment
CN111064888A (en) Prompting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant