CN110830723B - Shooting method, shooting device, storage medium and mobile terminal - Google Patents
- Publication number
- CN110830723B (publication) · CN201911207012.0A (application)
- Authority
- CN
- China
- Prior art keywords
- target object
- synthesized
- shooting
- pictures
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/80—Camera processing pipelines; Components thereof
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application discloses a shooting method, a shooting device, a storage medium and a mobile terminal. The method comprises the following steps: receiving a user selection instruction in a shooting interface and determining a target object; acquiring a plurality of pictures to be synthesized of the target object; when the target object moves on the shooting interface, synthesizing the plurality of pictures to be synthesized upon detecting a shooting instruction, obtaining a synthesized picture of the target object; and displaying the synthesized picture of the target object. This improves the quality of pictures taken while the mobile terminal or the photographed object is moving, and spares the user repeated operations.
Description
Technical Field
The present application relates to the field of communications, and in particular, to a shooting method, an apparatus, a storage medium, and a mobile terminal.
Background
In recent years, mobile terminals such as mobile phones and tablet computers are increasingly favored by users due to their portability, and a photographing function is one of the most common and important functions of the mobile terminals.
In the related art, when a user shoots a scene or object, movement of the mobile terminal or of the photographed object prevents the terminal from focusing on it. The resulting picture is blurred and of low quality, and the user must repeat the operation to shoot again.
Disclosure of Invention
The embodiment of the application provides a shooting method, which can improve the quality of pictures shot when a mobile terminal or a shooting object moves.
The embodiment of the application provides a shooting method, which comprises the following steps:
receiving a user selection instruction in a shooting interface, and determining a target object;
acquiring a plurality of pictures to be synthesized of the target object;
when the target object moves on the shooting interface, if a shooting instruction is detected, synthesizing the multiple pictures to be synthesized to obtain a synthesized picture of the target object;
and displaying the synthesized picture of the target object.
The embodiment of the present application further provides a shooting device, including:
the determining unit is used for receiving a user selection instruction in the shooting interface and determining a target object;
the acquisition unit is used for acquiring a plurality of pictures to be synthesized of the target object;
the synthesis unit is used for synthesizing the plurality of pictures to be synthesized to obtain a synthesized picture of the target object if a shooting instruction is detected when the target object moves on the shooting interface;
and the display unit is used for displaying the synthesized picture of the target object.
An embodiment of the present application further provides a storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the shooting method as described above.
The embodiment of the application further provides a mobile terminal, which includes a processor and a memory, wherein the memory stores a computer program, and the processor is used for executing the shooting method described above by calling the computer program stored in the memory.
The shooting method provided by the embodiment of the application comprises the following steps: receiving a user selection instruction in a shooting interface and determining a target object; acquiring a plurality of pictures to be synthesized of the target object; when the target object moves on the shooting interface, synthesizing the plurality of pictures to be synthesized upon detecting a shooting instruction, obtaining a synthesized picture of the target object; and displaying the synthesized picture of the target object. This improves the quality of pictures taken while the mobile terminal or the photographed object is moving, and spares the user repeated operations.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a first flowchart of a shooting method according to an embodiment of the present disclosure.
Fig. 2 is a second flowchart of the shooting method according to the embodiment of the present application.
Fig. 3 is a schematic structural diagram of a shooting device according to an embodiment of the present application.
Fig. 4 is a specific structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. The embodiments described are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a first flowchart of a shooting method according to an embodiment of the present disclosure. The shooting method comprises the following steps:
Step 101, receiving a user selection instruction in a shooting interface, and determining a target object.

Specifically, the shooting interface is the interface displayed on the screen of the mobile terminal when the camera module is triggered after the user starts the shooting mode. On this interface, the user can select the target object to be shot by tapping the screen or pressing a physical key. The mobile terminal may be a mobile phone, a tablet computer, a notebook computer or a similar device, and may have installed the various applications a user needs, such as entertainment applications (e.g., video, audio playback, games, and reading software) and service applications (e.g., map navigation and dining applications).
Specifically, when the mobile terminal starts the shooting mode, it can detect and mark a plurality of moving objects in advance, and when the user executes a selection instruction, the user selects among the marked moving objects. The marking may, for example, circle the approximate extent of each moving object; other marking methods are also possible and are not detailed here.
For example: when the mobile terminal starts the shooting mode, n moving objects are detected in advance, and the user selects m target objects among them, where 1 ≤ m ≤ n.
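The pre-detection of moving objects described above can be illustrated with a simple frame-differencing sketch. This is only a minimal illustration of the idea, not the patent's actual implementation: the function name and threshold are our own assumptions, and frames are represented as small grayscale grids.

```python
# Hypothetical sketch of pre-detecting a moving object between two preview
# frames by pixel differencing; frames are lists of lists of gray values.
def detect_moving_region(prev_frame, curr_frame, threshold=20):
    """Return the bounding box (top, left, bottom, right) of pixels whose
    value changed by more than `threshold`, or None if nothing moved."""
    moved = [(r, c)
             for r, row in enumerate(curr_frame)
             for c, v in enumerate(row)
             if abs(v - prev_frame[r][c]) > threshold]
    if not moved:
        return None
    rows = [r for r, _ in moved]
    cols = [c for _, c in moved]
    return (min(rows), min(cols), max(rows), max(cols))

prev = [[0] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
curr[1][2] = 255  # a bright object enters the preview
curr[2][2] = 255
print(detect_moving_region(prev, curr))  # (1, 2, 2, 2)
```

The returned box could then be drawn on the shooting interface to circle the approximate range of each moving object, among which the user picks the m target objects.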
And 102, acquiring a plurality of pictures to be synthesized of the target object.
Specifically, after one or more target objects are selected, each target object is divided into a plurality of pieces of partial information; combined, these pieces make up the target object, and each piece is a picture to be synthesized. When the pictures to be synthesized are acquired, they may be grouped so that each picture group to be synthesized contains a plurality of pictures to be synthesized. Each group, and each picture within a group, may be labeled, so that when the pieces of partial information of the target object are assembled into the virtual picture of the object, they can be output sequentially in label order.
Thus, step 102 may comprise:
decomposing the target object and dividing it into a plurality of picture groups to be synthesized, where each group contains a plurality of pictures to be synthesized. For example: each target object has N groups of pictures to be synthesized, with k pictures in each group.
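The grouping and labeling just described can be sketched as follows. The `decompose` function and the dictionary labeling scheme are illustrative assumptions, not the patent's implementation; pictures are stood in for by arbitrary objects.

```python
# Hypothetical sketch: split a target object's pictures-to-be-synthesized
# into N labeled groups of k pictures, so they can later be output in order.
def decompose(pictures, group_size):
    groups = [pictures[i:i + group_size]
              for i in range(0, len(pictures), group_size)]
    # Label groups 1..N and pictures 1..k within each group.
    return {g: {j: pic for j, pic in enumerate(group, start=1)}
            for g, group in enumerate(groups, start=1)}

parts = [f"part{i}" for i in range(6)]    # 6 pieces of partial information
labeled = decompose(parts, group_size=3)  # N = 2 groups, k = 3 each
print(len(labeled), len(labeled[1]))      # 2 3
```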
Since the target object displayed on the screen of the mobile terminal is digital information, this digital information needs to be converted into corresponding virtual picture information for subsequent synthesis. Thus, after the plurality of pictures to be synthesized of the target object are obtained, the method may further include:
and converting the plurality of pictures to be synthesized into the plurality of pieces of virtual picture information.
And 103, when the target object moves on the shooting interface, if a shooting instruction is detected, synthesizing the multiple pictures to be synthesized to obtain a synthesized picture of the target object.
Specifically, when it is detected that the target object is in a moving state in the shooting interface and the user executes the shooting instruction, the multiple images to be synthesized of the target object acquired in step 102 are synthesized to obtain a synthesized image of the target object.
Since the plurality of pictures to be synthesized have been converted into a plurality of pieces of virtual picture information after step 102, the synthesis operates on that virtual picture information to obtain the synthesized picture of the target object. Therefore, synthesizing the plurality of pictures to be synthesized to obtain a synthesized picture of the target object includes:
and synthesizing the information of the plurality of virtual pictures to obtain a synthesized picture of the target object.
Specifically, whether the target object moves in the shooting interface can be determined by synthesizing the plurality of pieces of virtual picture information into a virtual picture and checking whether that virtual picture appears in the shooting interface. If the target object moves in the shooting interface, the plurality of pictures to be synthesized are synthesized after a shooting instruction of the user is received, obtaining a synthesized picture of the target object. Therefore, when the target object moves on the shooting interface, if a shooting instruction is detected, synthesizing the plurality of pictures to be synthesized to obtain a synthesized picture of the target object includes:
synthesizing the information of the plurality of virtual pictures into a virtual picture of a target object;
detecting whether the target object moves on the shooting interface or not according to the virtual image of the target object;
and when the target object is detected to move in the shooting interface and a shooting instruction is detected, synthesizing the plurality of pictures to be synthesized to obtain a synthesized picture of the target object.
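One plausible way to synthesize the pictures, once aligned, is simple pixel averaging, which suppresses noise and blur relative to any single capture. This is a sketch under that assumption; the patent does not specify the synthesis algorithm, and the function name is our own.

```python
# Hypothetical sketch: average several aligned pictures of the target object.
# Random per-frame noise cancels out while stable object detail remains.
def synthesize(pictures):
    height, width = len(pictures[0]), len(pictures[0][0])
    return [[sum(p[r][c] for p in pictures) // len(pictures)
             for c in range(width)]
            for r in range(height)]

# Three noisy 1x2-pixel captures of the same region.
frames = [[[100, 104]], [[96, 100]], [[104, 96]]]
print(synthesize(frames))  # [[100, 100]]
```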
If the virtual picture is detected not to appear in the shooting interface, a preset time length can be set; if the virtual picture still does not appear within that preset time length, the previously obtained virtual information and virtual picture of the target object are cleared to avoid occupying excessive memory. Thus, step 103 further comprises:
when the virtual image of the target object is not in the shooting interface, waiting for a preset time length;
and if the waiting time length exceeds the preset time length, clearing the virtual information of the target object and the virtual image of the target object.
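The waiting and clearing steps above can be sketched with a small cache that tracks how long the object has been absent. The class name and the injectable `now` parameter (included so the behavior can be tested without real waiting) are our own assumptions.

```python
import time

# Hypothetical sketch: keep the target object's virtual picture information,
# and clear it once the object has been out of the shooting interface for
# longer than a preset time length.
class VirtualCache:
    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.data = None            # virtual info + virtual picture
        self._absent_since = None   # when the object left the interface

    def store(self, virtual_info):
        self.data = virtual_info
        self._absent_since = None

    def object_visible(self, visible, now=None):
        now = time.monotonic() if now is None else now
        if visible:
            self._absent_since = None   # object came back: reset the timer
        elif self._absent_since is None:
            self._absent_since = now    # start waiting
        elif now - self._absent_since > self.timeout:
            self.data = None            # waited too long: free the memory
```

`object_visible` would be called on each preview frame; once the waiting time exceeds the preset duration, the stored virtual information is cleared, matching the two sub-steps above.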
And 104, displaying the synthesized picture of the target object.
After the synthesized picture of the target object is obtained, it needs to be displayed to the user. When the user executes the shooting instruction, a relatively blurred picture of the target object is captured; the approximate position of the target object in that blurred picture can be matched according to the virtual picture of the target object, and the synthesized picture is placed at that position. Thus, step 104 may comprise:
determining a target position of the target object in a photographing interface when receiving a photographing instruction according to the virtual image of the target object;
and placing the synthesized picture of the target object at the target position of the photographing interface.
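The matching-and-placement just described can be sketched as a brute-force template match (minimizing the sum of squared differences) followed by pasting the synthesized picture at the best position. Both function names are illustrative assumptions; the patent does not specify the matching algorithm.

```python
# Hypothetical sketch: locate the target object in the blurred captured
# frame via its virtual picture, then place the synthesized picture there.
def locate(blurred, template):
    """Return the (row, col) where `template` best matches `blurred`,
    minimizing the sum of squared differences (SSD)."""
    th, tw = len(template), len(template[0])
    best_ssd, best_pos = None, (0, 0)
    for r in range(len(blurred) - th + 1):
        for c in range(len(blurred[0]) - tw + 1):
            ssd = sum((blurred[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos

def place(frame, composite, pos):
    """Overwrite `frame` with the synthesized picture at `pos`."""
    r0, c0 = pos
    for i, row in enumerate(composite):
        for j, v in enumerate(row):
            frame[r0 + i][c0 + j] = v
    return frame

blurred = [[0, 0, 0, 0],
           [0, 9, 8, 0],
           [0, 8, 9, 0],
           [0, 0, 0, 0]]
sharp = [[10, 10], [10, 10]]   # synthesized picture, also used as template
pos = locate(blurred, sharp)   # the blurred object best matches at (1, 1)
place(blurred, sharp, pos)
```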
The shooting method provided by the embodiment of the application comprises the following steps: receiving a user selection instruction in a shooting interface and determining a target object; acquiring a plurality of pictures to be synthesized of the target object; when the target object moves on the shooting interface, synthesizing the plurality of pictures to be synthesized upon detecting a shooting instruction, obtaining a synthesized picture of the target object; and displaying the synthesized picture of the target object. This improves the quality of pictures taken while the mobile terminal or the photographed object is moving, and spares the user repeated operations.
Referring to fig. 2, fig. 2 is a second flowchart of a shooting method according to an embodiment of the present disclosure.
The method comprises the following steps:
step 201, receiving a user selection instruction in a shooting interface, and determining a target object.
Specifically, the shooting interface is the interface displayed on the screen of the mobile terminal when the camera module is triggered after the user starts the shooting mode. On this interface, the user can select the target object to be shot by tapping the screen or pressing a physical key. The mobile terminal may be a mobile phone, a tablet computer, a notebook computer or a similar device, and may have installed the various applications a user needs, such as entertainment applications (e.g., video, audio playback, games, and reading software) and service applications (e.g., map navigation and dining applications).
Specifically, when the mobile terminal starts the shooting mode, it can detect and mark a plurality of moving objects in advance, and when the user executes a selection instruction, the user selects among the marked moving objects. The marking may, for example, circle the approximate extent of each moving object; other marking methods are also possible and are not detailed here.
For example: when the mobile terminal starts the shooting mode, n moving objects are detected in advance, and the user selects m target objects among them, where 1 ≤ m ≤ n.
Step 202, acquiring a plurality of pictures to be synthesized of the target object.

Specifically, after one or more target objects are selected, each target object is divided into a plurality of pieces of partial information; combined, these pieces make up the target object, and each piece is a picture to be synthesized. When the pictures to be synthesized are acquired, they may be grouped so that each picture group to be synthesized contains a plurality of pictures to be synthesized. Each group, and each picture within a group, may be labeled, so that when the pieces of partial information of the target object are assembled into the virtual picture of the object, they can be output sequentially in label order.
Since the target object displayed on the display screen of the mobile terminal is digital information, this digital information needs to be converted into corresponding virtual picture information for subsequent synthesis.

Step 203, converting the plurality of pictures to be synthesized into a plurality of pieces of virtual picture information.
And 204, synthesizing the information of the plurality of virtual pictures into a virtual picture of the target object.
Specifically, whether the target object moves in the shooting interface can be determined by synthesizing the plurality of pieces of virtual picture information into a virtual picture and checking whether that virtual picture appears in the shooting interface. If the target object moves in the shooting interface, the plurality of pictures to be synthesized are synthesized after a shooting instruction of the user is received, obtaining a synthesized picture of the target object.
And step 205, detecting whether the target object moves on the shooting interface according to the virtual graph of the target object.
Here, the shooting interface can be matched against the virtual picture synthesized in step 204.
And step 206, when the target object is detected to move in the shooting interface and the shooting instruction is detected, synthesizing the information of the plurality of virtual pictures to obtain a synthesized picture of the target object.
And step 207, determining the target position of the target object in the photographing interface when the photographing instruction is received according to the virtual image of the target object.
After the synthesized picture of the target object is obtained, it needs to be displayed to the user. When the user executes the shooting instruction, a relatively blurred picture of the target object is captured; the approximate position of the target object in that blurred picture can be matched according to the virtual picture of the target object, and the synthesized picture is placed at that position.
And step 208, placing the synthesized picture of the target object at the target position of the photographing interface.
And 209, waiting for a preset time when the virtual image of the target object is not in the shooting interface.
If the virtual picture is detected not to appear in the shooting interface, a preset time length can be set; if the virtual picture still does not appear within that preset time length, the previously obtained virtual information and virtual picture of the target object are cleared to avoid occupying excessive memory.
And step 210, if the waiting time length exceeds the preset time length, clearing the virtual information of the target object and the virtual image of the target object.
The shooting method provided by the embodiment of the application comprises the following steps: receiving a user selection instruction in a shooting interface and determining a target object; acquiring a plurality of pictures to be synthesized of the target object; when the target object moves on the shooting interface, synthesizing the plurality of pictures to be synthesized upon detecting a shooting instruction, obtaining a synthesized picture of the target object; and displaying the synthesized picture of the target object. This improves the quality of pictures taken while the mobile terminal or the photographed object is moving, and spares the user repeated operations.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a shooting device according to an embodiment of the present disclosure. The photographing apparatus includes: a determination unit 31, an acquisition unit 32, a synthesis unit 33, and a presentation unit 34.
The determining unit 31 is configured to receive a user selection instruction in the shooting interface, and determine the target object.
Specifically, the shooting interface is the interface displayed on the screen of the mobile terminal when the camera module is triggered after the user starts the shooting mode. On this interface, the user can select the target object to be shot by tapping the screen or pressing a physical key. The mobile terminal may be a mobile phone, a tablet computer, a notebook computer or a similar device, and may have installed the various applications a user needs, such as entertainment applications (e.g., video, audio playback, games, and reading software) and service applications (e.g., map navigation and dining applications).
Specifically, when the mobile terminal starts the shooting mode, it can detect and mark a plurality of moving objects in advance, and when the user executes a selection instruction, the user selects among the marked moving objects. The marking may, for example, circle the approximate extent of each moving object; other marking methods are also possible and are not detailed here.
For example: when the mobile terminal starts the shooting mode, n moving objects are detected in advance, and the user selects m target objects among them, where 1 ≤ m ≤ n.
The obtaining unit 32 is configured to obtain multiple images to be synthesized of the target object.
Specifically, after one or more target objects are selected, each target object is divided into a plurality of pieces of partial information; combined, these pieces make up the target object, and each piece is a picture to be synthesized. When the pictures to be synthesized are acquired, they may be grouped so that each picture group to be synthesized contains a plurality of pictures to be synthesized. Each group, and each picture within a group, may be labeled, so that when the pieces of partial information of the target object are assembled into the virtual picture of the object, they can be output sequentially in label order.
And a synthesizing unit 33, configured to synthesize the multiple pictures to be synthesized to obtain a synthesized picture of the target object if a shooting instruction is detected when the target object moves on the shooting interface.
Specifically, when it is detected that the target object is in a moving state in the shooting interface and the user executes the shooting instruction, the multiple acquired pictures to be synthesized of the target object are synthesized to obtain a synthesized picture of the target object.
And the display unit 34 is used for displaying the synthesized picture of the target object.
After the synthesized picture of the target object is obtained, it needs to be displayed to the user. When the user executes the shooting instruction, a relatively blurred picture of the target object is captured; the approximate position of the target object in that blurred picture can be matched according to the virtual picture of the target object, and the synthesized picture is placed at that position.
In some embodiments, the obtaining unit 32 may include:
and the decomposition subunit is used for decomposing the target object and dividing the target object into a plurality of picture groups to be synthesized, wherein each picture group to be synthesized comprises a plurality of pictures to be synthesized.
In some embodiments, the photographing apparatus may include:
and the conversion unit is used for converting the plurality of pictures to be synthesized into the plurality of pieces of virtual picture information.
In some embodiments, the synthesis unit 33 may include:
and the first synthesis subunit is used for synthesizing the plurality of pieces of virtual picture information to obtain a synthesized picture of the target object.
In some embodiments, the synthesis unit 33 may further include:
the second synthesis subunit is used for synthesizing the plurality of pieces of virtual picture information into a virtual image of the target object;
the detection subunit is used for detecting whether the target object moves on the shooting interface or not according to the virtual graph of the target object;
and the third synthesis subunit is used for synthesizing the plurality of pictures to be synthesized to obtain a synthesized picture of the target object when the target object is detected to move in the shooting interface and a shooting instruction is detected.
In some embodiments, the synthesis unit 33 may further include:
the waiting subunit is used for waiting for a preset time length when the virtual image of the target object is not in the shooting interface;
and the clearing subunit is configured to clear the virtual information of the target object and the virtual map of the target object if the waiting duration exceeds the preset duration.
In some embodiments, presentation unit 34 may further include:
the determining subunit is configured to determine, according to the virtual map of the target object, a target position of the target object in the photographing interface when the photographing instruction is received;
and the placing subunit is used for placing the synthesized picture of the target object at the target position of the photographing interface.
Based on the above method, the present invention also provides a storage medium having a plurality of instructions stored thereon, wherein the instructions are adapted to be loaded by a processor and to perform the photographing method as described above.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Fig. 4 is a block diagram showing a specific structure of a terminal according to an embodiment of the present invention; the terminal may be used to implement the shooting method and storage medium provided in the above embodiments.
As shown in fig. 4, the mobile terminal 1200 may include an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media (only one shown), an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a transmission module 170, a processor 180 including one or more processing cores (only one shown), and a power supply 190. Those skilled in the art will appreciate that the configuration illustrated in fig. 4 does not limit the mobile terminal 1200, which may include more or fewer components than illustrated, combine some components, or arrange the components differently. Wherein:
the RF circuitry 110 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuit 110 may communicate with various networks such as the internet, an intranet, a wireless network, or with a second device over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network.
The memory 120 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the shooting method, apparatus, storage medium and mobile terminal in the foregoing embodiments. The processor 180 executes various functional applications and data processing by running the software programs and modules stored in the memory 120, thereby implementing the shooting method described above. Memory 120 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 120 may be a storage medium as described above.
The input unit 130 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 130 may include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 131 (e.g., operations by a user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 131 may comprise two parts, a touch detection means and a touch controller.
The display unit 140 may be used to display information input by or provided to the user, as well as the various graphical user interfaces of the mobile terminal 1200, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, and the touch-sensitive surface 131 may cover the display panel 141. The display interface of the mobile terminal in the above embodiments may be presented by the display unit 140; that is, the content of the display interface is displayed by the display unit 140.
The mobile terminal 1200 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor, which may turn off the display panel 141 and/or the backlight when the mobile terminal 1200 is moved to the ear. As for the other sensors that may be further configured in the mobile terminal 1200, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, a detailed description is omitted here.
Through the transmission module 170, the mobile terminal 1200 provides the user with wireless broadband Internet access, helping the user send and receive e-mail, browse web pages, access streaming media, and so on.
The processor 180 is the control center of the mobile terminal 1200. It connects the various parts of the entire handset using various interfaces and lines, and performs the various functions of the mobile terminal 1200 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the handset as a whole. Optionally, the processor 180 may include one or more processing cores. In some embodiments, the processor 180 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 180.
Specifically, the processor 180 includes: an arithmetic logic unit (ALU), an application processor, a global positioning system (GPS), and a control and status bus (not shown).
The mobile terminal 1200 also includes a power supply 190 (e.g., a battery) for powering the various components. In some embodiments, the power supply may be logically coupled to the processor 180 via a power management system, so that charging, discharging, and power-consumption management are handled through the power management system. The power supply 190 may also include one or more of a DC or AC power source, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the mobile terminal 1200 may further include a camera (e.g., a front camera, a rear camera), a bluetooth module, and the like, which are not described in detail herein.
Specifically, in the present embodiment, the display unit 140 of the mobile terminal 1200 is a touch screen display, and the mobile terminal 1200 further includes a memory 120 and one or more programs, wherein the one or more programs are stored in the memory 120 and are configured to be executed by the one or more processors 180, the one or more programs including instructions for:
receiving a user selection instruction in a shooting interface, and determining a target object;
acquiring a plurality of pictures to be synthesized of the target object;
when the target object moves on the shooting interface, if a shooting instruction is detected, synthesizing the multiple pictures to be synthesized to obtain a synthesized picture of the target object;
and displaying the synthesized picture of the target object.
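The four instruction steps above can be sketched in plain Python. Every name here (`TargetObject`, `select_target`, `shoot`, and so on) is an illustrative assumption for the sketch, not terminology defined by the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class TargetObject:
    """Hypothetical model of a user-selected object on the shooting interface."""
    name: str
    fragments: List[bytes] = field(default_factory=list)  # "pictures to be synthesized"


def select_target(interface_objects: Dict[str, TargetObject], selection: str) -> TargetObject:
    # Step 1: receive the user's selection instruction and resolve the target object.
    return interface_objects[selection]


def acquire_fragments(target: TargetObject) -> List[bytes]:
    # Step 2: obtain the pictures to be synthesized for the target object.
    return list(target.fragments)


def synthesize(fragments: List[bytes]) -> bytes:
    # Step 3: merge the fragments into one synthesized picture
    # (modelled here as naive byte concatenation).
    return b"".join(fragments)


def shoot(interface_objects: Dict[str, TargetObject], selection: str,
          moving: bool, shoot_pressed: bool) -> Optional[bytes]:
    target = select_target(interface_objects, selection)
    fragments = acquire_fragments(target)
    # Synthesis is triggered only while the object moves and a shot is requested.
    if moving and shoot_pressed:
        return synthesize(fragments)  # Step 4: the caller displays this picture.
    return None
```

The conditional mirrors the claimed trigger: no synthesized picture is produced unless the target object is moving on the interface when the shooting instruction arrives.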
In some embodiments, when obtaining the plurality of pictures to be synthesized of the target object, the processor 180 may further execute the instructions of:
and decomposing the target object, and dividing the target object into a plurality of groups of pictures to be synthesized, wherein each group of pictures to be synthesized contains a plurality of pictures to be synthesized.
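One way to picture the decomposition step is tiling a pixel grid into fragment groups whose union reproduces the target object. This is only a sketch of the idea; the patent does not specify how the split is performed, and `decompose` is an assumed helper:

```python
from typing import List


def decompose(target_pixels: List[List[int]], rows: int, cols: int) -> List[List[List[int]]]:
    # Split a 2-D pixel grid into rows * cols fragment groups
    # ("pictures to be synthesized"); recombining the groups in order
    # reproduces the original target object.
    h = len(target_pixels) // rows
    w = len(target_pixels[0]) // cols
    groups = []
    for r in range(rows):
        for c in range(cols):
            groups.append([row[c * w:(c + 1) * w]
                           for row in target_pixels[r * h:(r + 1) * h]])
    return groups
```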
In some embodiments, after obtaining the plurality of pictures to be synthesized of the target object, the processor 180 may further execute the instructions of:
converting the plurality of pictures to be synthesized into a plurality of pieces of virtual picture information;
wherein synthesizing the plurality of pictures to be synthesized to obtain the synthesized picture of the target object comprises:
and synthesizing the plurality of pieces of virtual picture information to obtain a synthesized picture of the target object.
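A minimal sketch of the conversion and synthesis steps, with the "virtual picture information" modelled as plain dictionaries — an assumption for illustration, since the patent leaves the representation open:

```python
from typing import Dict, List


def to_virtual_info(fragments: List[bytes]) -> List[Dict]:
    # Hypothetical conversion: each picture fragment becomes a lightweight
    # record (index + payload) standing in for "virtual picture information".
    return [{"index": i, "data": f} for i, f in enumerate(fragments)]


def synthesize_virtual(infos: List[Dict]) -> bytes:
    # Recombine the virtual records, in original order, into one
    # synthesized picture (byte concatenation as a stand-in for blending).
    return b"".join(info["data"] for info in sorted(infos, key=lambda x: x["index"]))
```

Keeping an explicit index makes the synthesis order independent of the order in which the virtual records happen to be stored.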
In some embodiments, when the target object moves on the shooting interface and a shooting instruction is detected, the processor 180 may further execute the following instructions when synthesizing the plurality of pictures to be synthesized to obtain a synthesized picture of the target object:
synthesizing the plurality of pieces of virtual picture information into a virtual picture of the target object;
detecting, according to the virtual image of the target object, whether the target object moves on the shooting interface;
and when the target object is detected to move in the shooting interface and a shooting instruction is detected, synthesizing the plurality of pictures to be synthesized to obtain a synthesized picture of the target object.
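Movement detection against the virtual image could be as simple as comparing the image's centre position across successive frames. `has_moved` and the `(x, y)` tuples are illustrative assumptions, not part of the patent:

```python
from typing import List, Tuple


def has_moved(positions: List[Tuple[float, float]], eps: float = 0.0) -> bool:
    # positions: successive (x, y) centres of the target's virtual image on
    # the shooting interface; "movement" means any change beyond eps.
    x0, y0 = positions[0]
    return any(abs(x - x0) > eps or abs(y - y0) > eps
               for x, y in positions[1:])
```

An `eps` threshold lets small jitter in the tracked position be ignored rather than treated as movement.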
In some embodiments, the processor 180 may also execute instructions to:
when the virtual image of the target object is not in the shooting interface, waiting for a preset duration;
and if the waiting duration exceeds the preset duration, clearing the virtual picture information of the target object and the virtual image of the target object.
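The wait-and-clear behaviour can be sketched as a small state machine. Timestamps are passed in explicitly so the logic is testable, and `VirtualState` with its fields is an assumed design, not the patent's:

```python
from typing import List, Optional


class VirtualState:
    """Tracks the virtual picture information and virtual image of a target
    object, clearing both once the image has been off-interface too long."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.infos: List = []
        self.image: Optional[object] = None
        self._left_at: Optional[float] = None

    def on_frame(self, in_interface: bool, now: float) -> None:
        # Start (or cancel) the wait when the virtual image leaves
        # (or re-enters) the shooting interface; clear all state once the
        # wait exceeds the preset duration.
        if in_interface:
            self._left_at = None
        elif self._left_at is None:
            self._left_at = now
        elif now - self._left_at > self.timeout_s:
            self.infos, self.image, self._left_at = [], None, None
```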
In some embodiments, in presenting the synthesized picture of the target object, the processor 180 may also execute instructions to:
determining, according to the virtual image of the target object, a target position of the target object in the shooting interface when the shooting instruction is received;
and placing the synthesized picture of the target object at the target position of the shooting interface.
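Placing the synthesized picture at the recorded target position might look like the following clamp-to-interface helper — an illustrative assumption, since the patent does not prescribe any boundary handling:

```python
from typing import Tuple


def place_at_target(canvas_size: Tuple[int, int],
                    virtual_pos: Tuple[int, int],
                    picture_size: Tuple[int, int]) -> Tuple[int, int]:
    # Anchor the synthesized picture at the virtual image's position when the
    # shot was taken, clamped so it stays inside the shooting interface.
    cw, ch = canvas_size
    pw, ph = picture_size
    x, y = virtual_pos
    x = max(0, min(x, cw - pw))
    y = max(0, min(y, ch - ph))
    return (x, y)
```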
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The shooting method, apparatus, storage medium, and mobile terminal provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help in understanding the technical solutions and core ideas of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.
Claims (5)
1. A photographing method, characterized by comprising:
receiving a user selection instruction in a shooting interface, and determining a target object;
decomposing the target object and dividing the target object into a plurality of groups of pictures to be synthesized, wherein each group of pictures to be synthesized contains a plurality of pictures to be synthesized, and the pictures to be synthesized in the groups of pictures to be synthesized are combined to form the target object;
converting the plurality of pictures to be synthesized into a plurality of pieces of virtual picture information;
synthesizing the plurality of pieces of virtual picture information into a virtual picture of the target object;
detecting, according to the virtual image of the target object, whether the target object moves on the shooting interface;
when the target object is detected to move in the shooting interface and a shooting instruction is detected, synthesizing the plurality of pieces of virtual picture information to obtain a synthesized picture of the target object;
determining, according to the virtual image of the target object, a target position of the target object on the shooting interface when a shooting instruction is received;
and placing the synthesized picture of the target object at the target position of the shooting interface.
2. The photographing method according to claim 1, wherein the method further comprises:
when the virtual image of the target object is not in the shooting interface, waiting for a preset time length;
and if the waiting time length exceeds the preset time length, clearing the virtual picture information of the target object and the virtual image of the target object.
3. A photographing apparatus, comprising:
the determining unit is used for receiving a user selection instruction in the shooting interface and determining a target object;
the acquisition unit is used for decomposing the target object and dividing the target object into a plurality of picture groups to be synthesized, wherein each picture group to be synthesized comprises a plurality of pictures to be synthesized, and the pictures to be synthesized in the picture groups to be synthesized are combined to form the target object;
the conversion unit is used for converting the plurality of pictures to be synthesized into a plurality of pieces of virtual picture information;
the synthesis unit is used for synthesizing the plurality of pieces of virtual picture information into a virtual image of the target object;
detecting, according to the virtual image of the target object, whether the target object moves on the shooting interface;
when the target object is detected to move in the shooting interface and a shooting instruction is detected, synthesizing the plurality of pieces of virtual picture information to obtain a synthesized picture of the target object;
the display unit is used for determining, according to the virtual image of the target object, the target position of the target object on the shooting interface when a shooting instruction is received;
and placing the synthesized picture of the target object at the target position of the shooting interface.
4. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to execute the photographing method according to claim 1 or 2.
5. A mobile terminal, characterized in that the mobile terminal comprises a processor and a memory, the memory having stored therein a computer program, the processor being configured to execute the photographing method according to claim 1 or 2 by calling the computer program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911207012.0A CN110830723B (en) | 2019-11-29 | 2019-11-29 | Shooting method, shooting device, storage medium and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911207012.0A CN110830723B (en) | 2019-11-29 | 2019-11-29 | Shooting method, shooting device, storage medium and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110830723A CN110830723A (en) | 2020-02-21 |
CN110830723B true CN110830723B (en) | 2021-09-28 |
Family
ID=69542337
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911207012.0A Active CN110830723B (en) | 2019-11-29 | 2019-11-29 | Shooting method, shooting device, storage medium and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110830723B (en) |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5750864B2 (en) * | 2010-10-27 | 2015-07-22 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
JP2015035640A (en) * | 2012-06-25 | 2015-02-19 | 株式会社ニコン | Imaging apparatus |
EP2792149A4 (en) * | 2011-12-12 | 2016-04-27 | Intel Corp | Scene segmentation using pre-capture image motion |
JP2013162333A (en) * | 2012-02-06 | 2013-08-19 | Sony Corp | Image processing device, image processing method, program, and recording medium |
JP5678911B2 (en) * | 2012-03-12 | 2015-03-04 | カシオ計算機株式会社 | Image composition apparatus, image composition method, and program |
CN106210495A (en) * | 2015-05-06 | 2016-12-07 | 小米科技有限责任公司 | Image capturing method and device |
US9930271B2 (en) * | 2015-09-28 | 2018-03-27 | Gopro, Inc. | Automatic composition of video with dynamic background and composite frames selected based on frame criteria |
CN107872614A (en) * | 2016-09-27 | 2018-04-03 | 中兴通讯股份有限公司 | A kind of image pickup method and filming apparatus |
KR20180092495A (en) * | 2017-02-09 | 2018-08-20 | 한국전자통신연구원 | Apparatus and method for Object of Interest-centric Best-view Generation in Multi-camera Video |
CN106993128A (en) * | 2017-03-02 | 2017-07-28 | 深圳市金立通信设备有限公司 | A kind of photographic method and terminal |
CN109167910A (en) * | 2018-08-31 | 2019-01-08 | 努比亚技术有限公司 | focusing method, mobile terminal and computer readable storage medium |
CN109120862A (en) * | 2018-10-15 | 2019-01-01 | Oppo广东移动通信有限公司 | High-dynamic-range image acquisition method, device and mobile terminal |
CN112672055A (en) * | 2020-12-25 | 2021-04-16 | 维沃移动通信有限公司 | Photographing method, device and equipment |
2019
- 2019-11-29 CN CN201911207012.0A patent/CN110830723B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110830723A (en) | 2020-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107885533B (en) | Method and device for managing component codes | |
CN108495045B (en) | Image capturing method, image capturing apparatus, electronic apparatus, and storage medium | |
US11363196B2 (en) | Image selection method and related product | |
CN109240577B (en) | Screen capturing method and terminal | |
CN108132790B (en) | Method, apparatus and computer storage medium for detecting a garbage code | |
CN109922356B (en) | Video recommendation method and device and computer-readable storage medium | |
CN111176602B (en) | Picture display method and device, storage medium and intelligent device | |
CN110769313B (en) | Video processing method and device and storage medium | |
CN111857793B (en) | Training method, device, equipment and storage medium of network model | |
CN105635553B (en) | Image shooting method and device | |
CN111539795A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN107330867B (en) | Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment | |
CN110677713B (en) | Video image processing method and device and storage medium | |
CN108510266A (en) | A kind of Digital Object Unique Identifier recognition methods and mobile terminal | |
CN112989198B (en) | Push content determination method, device, equipment and computer-readable storage medium | |
CN111556248B (en) | Shooting method, shooting device, storage medium and mobile terminal | |
CN105513098B (en) | Image processing method and device | |
CN108829600B (en) | Method and device for testing algorithm library, storage medium and electronic equipment | |
CN109922256B (en) | Shooting method and terminal equipment | |
CN108595104B (en) | File processing method and terminal | |
CN108763908B (en) | Behavior vector generation method, device, terminal and storage medium | |
CN113469322A (en) | Method, device, equipment and storage medium for determining executable program of model | |
CN110830723B (en) | Shooting method, shooting device, storage medium and mobile terminal | |
CN111182153B (en) | System language setting method and device, storage medium and mobile terminal | |
CN110264292A (en) | Determine the method, apparatus and storage medium of effective period of time |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||