CN110233966B - Image generation method and terminal - Google Patents

Image generation method and terminal

Info

Publication number
CN110233966B
CN110233966B (application CN201910472529.6A)
Authority
CN
China
Prior art keywords
image
terminal
target objects
ith
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910472529.6A
Other languages
Chinese (zh)
Other versions
CN110233966A (en
Inventor
李陈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN201910472529.6A priority Critical patent/CN110233966B/en
Publication of CN110233966A publication Critical patent/CN110233966A/en
Application granted granted Critical
Publication of CN110233966B publication Critical patent/CN110233966B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control based on recognised objects where the recognised objects include parts of the human body
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 GUIs for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/633 Electronic viewfinders displaying additional information relating to control or operation of the camera
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present invention provide an image generation method and a terminal, applied to the field of communication technology, to solve the problem of ghosting or distortion in panoramic images. The method is applied to a terminal and includes: shooting a first image; displaying at least two target objects on the first image; and generating a target image based on the first image and the at least two target objects, where the at least two target objects are generated based on a first object in the first image, or the at least two target objects include at least one second object in a second image. The method applies in particular to scenes in which the terminal shoots panoramic images.

Description

Image generation method and terminal
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image generation method and a terminal.
Background
Currently, when a terminal shoots a panoramic image, if a user wants an object to appear multiple times in one panoramic image, the object usually has to move repeatedly within the scene covered by the panoramic image while it is being shot.
However, while the terminal's camera and the object move simultaneously, their moving speeds may differ considerably, so the image of the object in the panoramic image acquired by the terminal through the camera may show ghosting or distortion; that is, the panoramic image suffers from ghosting or distortion.
Disclosure of Invention
The embodiments of the present invention provide an image generation method and a terminal, aiming to solve the problem that a panoramic image exhibits ghosting or distortion.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides an image generation method, which is applied to a terminal, and the method includes: shooting a first image; displaying at least two target objects on the first image; generating a target image based on the first image and the at least two target objects; wherein the at least two target objects are generated based on the first object in the first image or the at least two target objects comprise at least one second object in the second image.
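The three steps of the first aspect can be sketched in code. The following is a minimal illustration, not the patented implementation: images are grayscale 2D lists, an "object" is a small pixel patch plus a paste position, and all names (`paste`, `generate_target_image`) are illustrative.

```python
def paste(image, patch, top, left):
    """Return a copy of `image` with `patch` pasted at (top, left)."""
    out = [row[:] for row in image]
    for r, patch_row in enumerate(patch):
        for c, px in enumerate(patch_row):
            out[top + r][left + c] = px
    return out

def generate_target_image(first_image, target_objects):
    """Composite every (patch, top, left) target object onto the first image."""
    result = first_image
    for patch, top, left in target_objects:
        result = paste(result, patch, top, left)
    return result

# Example: a 4x8 "panorama" showing the same 1x2 object at two positions,
# i.e. at least two target objects displayed on the first image.
first_image = [[0] * 8 for _ in range(4)]
obj = [[255, 255]]
target = generate_target_image(first_image, [(obj, 1, 1), (obj, 1, 5)])
```

The first image itself is never modified; each call to `paste` works on a copy, which mirrors the method's separation between the shot first image and the generated target image.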
In a second aspect, an embodiment of the present invention further provides a terminal, where the terminal includes: the device comprises a shooting module, a display module and a generation module; the shooting module is used for shooting a first image; the display module is used for displaying at least two target objects on the first image obtained by the shooting module; the generating module is used for generating a target image based on the first image and the at least two target objects displayed by the display module; wherein the at least two target objects are generated based on the first object in the first image or the at least two target objects comprise at least one second object in the second image.
Optionally, the at least two target objects are generated based on a first object in the first image; the display module is specifically configured to receive a first input from a user on the first object in the first image, and, in response to the first input, copy the first object in the first image at least once to obtain at least two target objects and display each target object at a target position specified by the first input.
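The copy-and-place behaviour described above can be sketched as follows. This is a hedged illustration only: the object is approximated by a rectangular bounding box `(top, left, h, w)`, and the target positions stand in for the positions the user's first input selects; the real terminal would segment the object rather than copy a rectangle.

```python
def extract(image, top, left, h, w):
    """Cut out the h x w patch whose top-left corner is (top, left)."""
    return [row[left:left + w] for row in image[top:top + h]]

def duplicate_object(image, bbox, target_positions):
    """Copy the object inside `bbox` to each user-chosen target position,
    so the returned image shows the object at least twice."""
    top, left, h, w = bbox
    patch = extract(image, top, left, h, w)
    out = [row[:] for row in image]
    for t, l in target_positions:
        for r in range(h):
            for c in range(w):
                out[t + r][l + c] = patch[r][c]
    return out

# A single object pixel (value 9) at (1, 1), copied once to (1, 3):
image = [[0, 0, 0, 0],
         [0, 9, 0, 0],
         [0, 0, 0, 0]]
result = duplicate_object(image, (1, 1, 1, 1), [(1, 3)])
```

After the call, the original object and its copy both appear, which is what "copied at least once, obtaining at least two target objects" amounts to.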
Optionally, the at least two target objects comprise at least one second object in the second image; and the shooting module is also used for shooting a second image before the display module displays at least two target objects on the first image, and the background of the second image is the same as at least part of the background in the first image.
Optionally, the display module is specifically configured to acquire, in the second image, an ith second sub-area whose background is the same as that of the ith first sub-area in the first image, and to display at least two target objects on the first image based on the ith first sub-area and the ith second sub-area, where i is a positive integer.
Optionally, the display module is specifically configured to replace the image content of the ith first sub-area with the image content of the ith second sub-area, where the ith second sub-area includes an ith second object.
Optionally, the display module is specifically configured to extract an ith object image of an ith second object in the ith second sub-region, and to display the ith object image in a target area within the ith first sub-area, the target area being the area of the ith first sub-area that corresponds to the display area of the ith second object in the second image.
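The sub-region transplant described above can be sketched as follows, under strong simplifying assumptions: the matched sub-regions are the same size, the shared background is a single pixel value, and any pixel of the second sub-region that differs from that background is treated as belonging to the ith second object. The function name and the flat-background model are illustrative, not the patent's actual matching procedure.

```python
def transplant_object(first_sub, second_sub, background):
    """Within matched sub-regions sharing a background, copy only the
    object pixels (those differing from the common background) from the
    second sub-region into the corresponding area of the first."""
    out = [row[:] for row in first_sub]
    for r, row in enumerate(second_sub):
        for c, px in enumerate(row):
            if px != background:  # object pixel in the second sub-region
                out[r][c] = px
    return out

# Background value 5 everywhere; the second sub-region holds one object
# pixel (7), which lands at the same coordinates in the first sub-region.
first_sub = [[5, 5], [5, 5]]
second_sub = [[5, 7], [5, 5]]
merged = transplant_object(first_sub, second_sub, 5)
```

Because only non-background pixels are copied, the object keeps the position it had in the second image, matching the "target area corresponds to the display area of the ith second object" condition.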
Optionally, the terminal includes a rotatable camera; the shooting module is specifically used for controlling the camera to rotate by a first angle according to a first direction and shooting in the rotating process to obtain a first image; the display module controls the camera to rotate by a second angle according to the first direction before displaying at least two target objects on the first image, and shooting is carried out in the rotating process to obtain a second image; wherein the second angle is less than or equal to the first angle.
In a third aspect, an embodiment of the present invention provides a terminal, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the image generation method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image generation method according to the first aspect.
In an embodiment of the present invention, a first image may be captured; at least two target objects are displayed on the first image, where the at least two target objects are generated based on a first object in the first image, or include at least one second object in a second image; subsequently, a target image may be generated based on the first image and the at least two target objects. In this way, the terminal can obtain a target image including at least two target objects from at least one second object in the second image or from the first object in the first image, and can shoot in real time a panoramic image that shows an object multiple times without requiring the shooting object (such as a person) to move continuously through the scene corresponding to the first image. Therefore, when a user wants an object to appear multiple times in one panoramic image, the tedious process of moving the shooting object through the scene is avoided, as is the ghosting or distortion that such movement would otherwise introduce into the panoramic image shot by the terminal. That is, the quality of the panoramic image shot by the terminal can be improved.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image generating method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a terminal display content according to an embodiment of the present invention;
fig. 4 is a second schematic diagram of the display content of the terminal according to the embodiment of the present invention;
fig. 5 is a third schematic diagram of the display content of the terminal according to the embodiment of the present invention;
FIG. 6 is a fourth schematic diagram of the terminal display content according to the embodiment of the present invention;
fig. 7 is a schematic structural diagram of a possible terminal according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" in this context means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. "Plurality" means two or more.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" is not to be construed as preferred or more advantageous than other embodiments or designs; rather, these words are intended to present related concepts in a concrete fashion.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first object and the second object, etc. are for distinguishing different objects, not for describing a particular order of the objects.
The image generation method and the terminal provided by the embodiments of the present invention can shoot a first image; display at least two target objects on the first image, where the at least two target objects are generated based on a first object in the first image, or include at least one second object in a second image; and subsequently generate a target image based on the first image and the at least two target objects. In this way, the terminal can obtain a target image including at least two target objects from at least one second object in the second image or from the first object in the first image, and can shoot in real time a panoramic image that shows an object multiple times without requiring the shooting object (such as a person) to move continuously through the scene corresponding to the first image. Therefore, when a user wants an object to appear multiple times in one panoramic image, the tedious process of moving the shooting object through the scene is avoided, as is the ghosting or distortion that such movement would otherwise introduce into the panoramic image shot by the terminal. That is, the quality of the panoramic image shot by the terminal can be improved.
The terminal in the embodiment of the invention can be a mobile terminal or a non-mobile terminal. The mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
It should be noted that, in the image generation method provided in the embodiments of the present invention, the executing entity may be a terminal, the Central Processing Unit (CPU) of the terminal, or a control module in the terminal for executing the image generation method. In the embodiments of the present invention, the image generation method is described by taking a terminal executing the method as an example.
The terminal in the embodiment of the present invention may be a terminal having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the image generation method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application. For example, applications such as a system setup application, a system chat application, and a system camera application. And the third-party setting application, the third-party camera application, the third-party chatting application and other application programs.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image generation method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image generation method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal can implement the image generation method provided by the embodiment of the invention by running the software program in the android operating system.
The following describes the image generation method provided by the embodiments of the present invention in detail with reference to the flowchart of the image generation method shown in fig. 2. Although a logical order is shown in the method flowchart, in some cases the steps shown or described may be performed in a different order. For example, the image generation method shown in fig. 2 may include steps 201 to 203:
step 201, the terminal shoots a first image.
Wherein the first image may be a panoramic image.
Optionally, the terminal is installed with an application program such as a system camera application program or a third-party camera application program. Among them, one camera application may provide a panorama photographing function for supporting the terminal to perform a function of photographing a panorama image through the camera application.
Illustratively, in the embodiment of the present invention, when the terminal executes the panoramic shooting function in the camera application, a panoramic shooting process corresponding to the panoramic shooting function may be executed. In a panoramic shooting process, the terminal can control the camera to acquire images of the same shooting scene at different view angles so as to obtain a panoramic image of the shooting scene. Of course, the shooting scene may include a shooting object such as a person or a still object.
Optionally, in the embodiment of the present invention, the terminal may have one or more cameras.
Alternatively, one camera in the terminal may be a conventional camera that is fixed in the terminal and cannot rotate or move independently; that is, when the terminal is held still, the viewing angle of the conventional camera does not change. When the user controls the terminal to execute the panoramic shooting function in the camera application, the terminal may prompt the user to move the terminal during the panoramic shooting process so as to move or rotate the conventional camera. The terminal can then acquire multiple images of the current scene from different viewing angles through the camera and integrate them into a panoramic image.
In a panoramic shooting process, the terminal may prompt the user to move or rotate the terminal in a certain direction at a steady speed, i.e., to control the camera in the terminal to move or rotate so that it scans an image of the current scene. For example, the terminal may display a prompt message such as "please continue to move the terminal during panorama shooting". In addition, the shooting preview interface (such as the viewfinder interface of the camera application) displayed on the screen during panorama shooting may include an arrow indicating the direction in which to move the terminal and an auxiliary guide line.
For example, the user may control the terminal to move from top to bottom based on the photographing preview interface currently displayed by the terminal, or the user may control the terminal to move from left to right based on the photographing preview interface currently displayed by the terminal, so as to control the terminal to photograph the panoramic image through the camera.
In addition, the terminal may scan the image in a top-to-bottom or left-to-right direction relative to the currently displayed shooting preview interface.
Further, when the user moves the terminal too fast, the terminal may display another prompt message in the current shooting preview interface asking the user to slow down, for example "please slow down".
Optionally, one camera in the terminal may be a rotary camera (i.e., a rotatable camera), and when the terminal is fixed, the rotary camera may rotate the camera itself, so that the camera acquires images with different viewing angles. When the user controls the terminal to execute the panoramic shooting function in the camera application program, the terminal can control the rotating camera to rotate in a panoramic shooting process so as to acquire a plurality of images of a current shooting scene at different view angles through the rotating camera and integrate the images to obtain a panoramic image.
Optionally, the multiple cameras in the terminal may be multiple cameras fixed in the terminal, and the viewing angle of each camera is different. When the user controls the terminal to execute the panoramic shooting function in the camera application program, the terminal can control the plurality of cameras to respectively acquire a plurality of images with different view angles in one panoramic shooting process, and integrate the plurality of images to obtain a panoramic image (such as a first image).
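The integration of multiple views into one panoramic image can be sketched very naively as follows. This is an assumption-laden illustration, not the terminal's actual stitching algorithm: each camera (or camera position) is assumed to contribute a same-height strip whose leftmost `overlap` columns duplicate the previous strip, so integration reduces to dropping the duplicated columns. Real panorama stitching would align, warp, and blend the views.

```python
def stitch_views(strips, overlap):
    """Naively integrate left-to-right views into one panorama by dropping
    the `overlap` columns each strip shares with the previous one."""
    pano = [row[:] for row in strips[0]]
    for strip in strips[1:]:
        for r, row in enumerate(strip):
            pano[r].extend(row[overlap:])
    return pano

# Two 2x3 views sharing one column (3/6) produce a 2x5 panorama.
left = [[1, 2, 3], [4, 5, 6]]
right = [[3, 7, 8], [6, 9, 10]]
pano = stitch_views([left, right], overlap=1)
```

The same sketch covers all three capture modes described above (moving a fixed camera, rotating a rotatable camera, or using several fixed cameras): each mode just produces the list of view strips differently.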
Exemplarily, as shown in fig. 3, a schematic diagram of displaying content for a terminal according to an embodiment of the present invention is provided. Fig. 3 shows an interface 31 of a camera application displayed by the terminal, where the interface 31 includes a "panorama" control 311 and a shooting control 312. Specifically, a user selects the "panorama" control 311, so that the "panorama" control 311 is in a selected state, the terminal is triggered to start a panorama shooting function, and a panorama shooting process is run. Specifically, when the "panorama" control 311 is in the selected state, the interface 31 displayed by the terminal includes panorama shooting instruction information such as an arrow indicating a direction in which the terminal moves, an auxiliary line, and a progress bar, and the interface 31 further includes a prompt message "please continue moving the terminal during panorama shooting". Subsequently, after the terminal receives the user input to the photographing control 312, photographing of the panoramic image may be started.
In addition, the lower left corner of the interface 31 shown in fig. 3 also includes an entry 313 for an album, and thumbnails displayed on the entry 313 are thumbnails of images taken last time by the camera application of the terminal. At this time, after the terminal receives a trigger input to the entry 313 from the user, an interface of the album application including the image obtained by the last shooting may be displayed on the screen.
Alternatively, after the user controls the terminal to capture the first image, the terminal may preview-display the first image on the screen.
Optionally, when the terminal obtains the first image by shooting, the terminal may directly display the first image on the screen, that is, the first image is displayed on the shooting preview interface. Or, when the terminal finishes shooting the first image, the first image may be stored in the album application program, and when the terminal receives an input that the user triggers to display the first image, the first image is displayed on the current interface. For example, when the terminal currently displays a shooting preview interface on a screen, if the terminal receives an input of a user to an album entry at the lower left corner of the shooting preview interface, a first image is displayed on the current interface.
Step 202, the terminal displays at least two target objects on the first image.
Wherein the at least two target objects are generated based on the first object in the first image or the at least two target objects comprise at least one second object in the second image.
It should be noted that, in the embodiment of the present invention, an object (e.g., a first object) in an image may be a foreground in the image, such as a person or a still in the image. Specifically, one image may include a foreground and a background, and the terminal may identify the foreground and the background in one image.
Generally, the color of the foreground and the color of the background in the image are clearly different, and the terminal can identify the foreground (i.e. the object) in the image by adjusting the threshold value of the binarization function.
For example, the terminal may perform a graying process and a binarization process on an image including an object (e.g., a first panoramic image including a person) to obtain a binarized image of the image. Wherein the terminal can identify an object (e.g., a first object) from the binarized image by adjusting a threshold of the binarization function.
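The graying and binarization steps above can be sketched as follows. The BT.601 luminance weights and the threshold value are conventional choices, not taken from the patent; images are plain 2D lists so the example stays self-contained.

```python
def to_gray(rgb_image):
    """Grayscale conversion using ITU-R BT.601 luminance weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray, threshold):
    """Pixels above `threshold` become foreground (1), the rest background (0).
    Adjusting `threshold` is what lets the terminal separate the object
    from the background, as described above."""
    return [[1 if px > threshold else 0 for px in row] for row in gray]

# A black pixel and a pure-red pixel: red grays to 76 and, with a low
# threshold, is classified as foreground while black stays background.
gray = to_gray([[(0, 0, 0), (255, 0, 0)]])
mask = binarize(gray, 50)
```

Lowering the threshold grows the foreground mask and raising it shrinks it, which is the "adjusting the threshold of the binarization function" knob mentioned in the text.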
Of course, the terminal may recognize the foreground (i.e., the object) in one image through other image recognition methods. For example, the terminal may identify an object in an image through an algorithm of edge detection.
Optionally, the terminal may determine a foreground (i.e., an object) in an image according to a pixel value of each pixel point in the image. Such as pixel values of the foreground in an image, satisfy a range of values.
Alternatively, the terminal may determine the foreground (i.e., the object) in an image according to the contrast of the image. For example, the difference between the contrast of the foreground in an image and the contrast of the background in the image satisfies a certain range of values.
Wherein the second image is different from the first image. The second image can be obtained by shooting the terminal in real time in a panoramic shooting process of shooting the first image, or the second image is obtained by the terminal from local resources or cloud resources.
It is to be understood that the at least two target objects may include the first object and other objects derived from the first object; alternatively, they may include at least one second object in the second image and other objects derived from that at least one second object.
Optionally, after the terminal obtains the first image by shooting, the terminal may detect and acquire at least two target objects under a trigger input of a user, or automatically.
Specifically, the image generation method provided by the embodiment of the present invention may be applied to the following application scenario 1 or application scenario 2:
the application scene 1, the at least two target objects are generated based on a first object in the first image.
Specifically, in a panoramic shooting process in the camera application, an image shot by the terminal in real time is a first image. Subsequently, the terminal may identify a first object in the first image and derive at least two target objects based on the first object. Thus, the terminal may display at least two target objects on the first image.
Exemplarily, fig. 4 is a schematic diagram of terminal display content according to an embodiment of the present invention. The panoramic image 41 shown in (a) of fig. 4 includes an object 411; here, the panoramic image 41 may be the first image and the object 411 may be the first object. When the terminal displays the panoramic image 41 on the screen, it may derive an object 412 from the object 411 and display both the object 411 and the object 412 on the panoramic image 41 as shown in (b) of fig. 4, i.e., display at least two target objects on the first image. The object 412 is identical to the object 411.
Similarly, the terminal may obtain other objects besides the object 412 according to the object 411 to obtain at least two target objects, which is not specifically limited in this embodiment of the present invention.
Application scenario 2: the at least two target objects include at least one second object in the second image.
Optionally, after the terminal obtains the first image by shooting, the terminal may directly prompt the user to decide whether to acquire a second image; alternatively, the terminal may prompt the user only after detecting that the current first image does not include any object.
Optionally, when the terminal displays the target image on the shooting preview interface, the shooting preview interface may further include the first image.
The second image may be shot by the terminal in real time when triggered by the user, for example, when the user inputs to a shooting control (e.g., the shooting control 312) in the shooting preview interface. Alternatively, the second image may be selected by the user from an album application, for example, by inputting to an entry (e.g., entry 313) of the album application in the shooting preview interface.
Exemplarily, fig. 5 is a schematic diagram of content displayed by a terminal according to an embodiment of the present invention. A panoramic image 51 and an image 52 are displayed on the shooting preview interface shown in (a) of fig. 5; the panoramic image 51 may be the first image, and the image 52 may be the second image. The panoramic image 51 includes no object, while the image 52 includes an object 521.
When the terminal displays the panoramic image 51 on the screen, the terminal may derive an object 522 and an object 523 from the object 521 in the image 52, and display the object 522 and the object 523 on the panoramic image 51 as shown in (b) of fig. 5, that is, display at least two target objects on the first image. The object 521, the object 522, and the object 523 are identical.
Similarly, the terminal may obtain other objects besides the object 522 and the object 523 according to the object 521 to obtain at least two target objects, which is not specifically limited in this embodiment of the present invention.
Similarly, in the case where the second image includes a plurality of objects, the terminal may regard each of the plurality of objects as one second object, and display the objects derived from each second object on the first image.
Optionally, when the terminal displays at least two target objects on the first image, the first image itself may be used as one layer, the at least two target objects may be used as another layer, and the two layers are displayed in an overlapping manner.
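The layer-overlay display described above can be sketched as a simple alpha composite: the first image is one layer, and the target objects occupy a second layer with a mask marking where they are drawn. This is an illustrative sketch; the function and array names below are assumptions, not from the patent.

```python
import numpy as np

def overlay_layers(base, obj_layer, mask):
    """Composite the target-object layer over the first-image layer.

    base:      (H, W, 3) uint8 first image (bottom layer)
    obj_layer: (H, W, 3) uint8 layer holding the at least two target objects
    mask:      (H, W) float in [0, 1]; 1 where a target object is drawn
    """
    a = mask[..., None]                      # broadcast mask over color channels
    out = base * (1.0 - a) + obj_layer * a   # per-pixel blend of the two layers
    return out.astype(np.uint8)

# Toy example: a gray 4x4 base and an object layer with a white 2x2 "object".
base = np.full((4, 4, 3), 100, dtype=np.uint8)
obj_layer = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4))
obj_layer[1:3, 1:3] = 255
mask[1:3, 1:3] = 1.0
composited = overlay_layers(base, obj_layer, mask)
```

Outside the mask the first image shows through unchanged; inside it, the object layer replaces the background, which matches the two-layer overlapping display described above.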
Further, optionally, in application scenario 1, the layer where the at least two target objects are located may be the layer where the objects other than the first object, among the at least two target objects, are located.
Optionally, in this embodiment of the present invention, at least two target objects displayed by the terminal on the first image include not only an object generated based on the first object in the first image, but also an object generated based on at least one second object in the second image, which is not described again in this embodiment of the present invention.
Step 203, the terminal generates a target image based on the first image and the at least two target objects.
The terminal can superimpose the layer where the first image is located and the layers where the at least two target objects are located to obtain the target image.
Optionally, after the terminal generates the target image, the target image may be saved, for example, the target image may be saved in a storage area corresponding to the album application. In this way, subsequent users can be supported to view the target image through the photo album application program.
Alternatively, the terminal may generate the target image based on the first image and the at least two target objects in response to a trigger input from the user, or automatically. For example, when the terminal displays the preview image, it may display a determination control for triggering the terminal to generate the panoramic image; after receiving a user input to the determination control, the terminal generates the target image. In addition, the terminal may automatically generate the target image when the duration for which the at least two target objects have been displayed on the first image reaches a preset duration (e.g., 15 seconds).
Further, optionally, during the process of generating the target image, the terminal may save the first image and/or the second image, for example, save the first image and/or the second image in a storage area corresponding to the album application. In this manner, subsequent users may be supported in viewing the first image and/or the second image through the album application.
It should be noted that, with the image generation method provided by the embodiment of the present invention, the terminal may shoot a first image; display at least two target objects on the first image, where the at least two target objects are generated based on the first object in the first image, or include at least one second object in the second image; and subsequently generate a target image based on the first image and the at least two target objects. In this way, the terminal can obtain a target image including at least two target objects from at least one second object in the second image, or from the first object in the first panoramic image, and can shoot a panoramic image including multiple objects in real time without requiring the shooting object (such as a person) to keep moving through the shooting scene corresponding to the first image. Therefore, when the user wants an object to appear multiple times in a panoramic image, the tedious process of the shooting object repeatedly moving through the shooting scene is avoided, as is the ghosting or distortion that such movement causes in the panoramic image shot by the terminal. That is, the quality of the panoramic image shot by the terminal can be improved.
In a possible implementation manner, in the image generation method provided by the embodiment of the present invention, the at least two target objects are generated based on a first object in the first image. Specifically, step 202 may be implemented by steps 202a and 202b:
step 202a, the terminal receives a first input of a user to a first object in a first image.
Step 202b, in response to the first input, the terminal copies the first object in the first image at least once to obtain at least two target objects, and displays each target object at the target position indicated by the first input.
In the application scenario 1, the terminal may copy the first object and paste and display at least one copied first object on the first image. Wherein the at least two target objects include a first object and at least one copied first object.
It is understood that, when the terminal copies the first object from the first image, the first object included in the first image may continuously exist and a display position of the first object may not be changed.
In addition, the terminal may cut the first object, and paste and display at least two copies of the cut first object on the first image. In this case, the at least two target objects are the at least two pasted first objects.
It is understood that when the terminal cuts the first object from the first image and pastes it, the first image still includes a first object, but its display position may be changed.
Each of the at least two target objects may be obtained by performing a paste operation on the cut first object. Alternatively, a part of the at least two target objects is obtained by pasting the cut first object, and the other part is obtained by copying one of those pasted objects and then pasting the copy.
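The copy/cut-then-paste behavior of steps 202a and 202b can be illustrated on a plain array, treating the first object as a rectangular region. This is a hedged sketch: the box-and-coordinate interface below is an assumption, not specified by the patent.

```python
import numpy as np

def paste_object(image, obj_box, targets, cut=False, fill=0):
    """Copy (or cut) the rectangle obj_box = (y, x, h, w) out of `image` and
    paste it at each top-left (y, x) position in `targets`."""
    y, x, h, w = obj_box
    patch = image[y:y+h, x:x+w].copy()
    out = image.copy()
    if cut:
        out[y:y+h, x:x+w] = fill  # a cut removes the object from its original spot
    for ty, tx in targets:        # each paste yields one more target object
        out[ty:ty+h, tx:tx+w] = patch
    return out

img = np.arange(36, dtype=np.uint8).reshape(6, 6)
copied = paste_object(img, (0, 0, 2, 2), [(4, 4)])               # copy keeps the original
cutout = paste_object(img, (0, 0, 2, 2), [(4, 4)], cut=True, fill=255)
```

With `cut=False` the original first object remains at its position (the copy case); with `cut=True` the original region is cleared before pasting, so the object's display position changes (the cut case).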
Alternatively, the display positions, on the first image, of the objects other than the first object among the at least two target objects may be determined by a user trigger, determined by the terminal according to a preset arrangement rule, or determined by the terminal at random. Illustratively, the preset arrangement rule may be to distribute the plurality of objects uniformly across the image.
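A minimal version of the "uniform distribution" arrangement rule, assuming equal-width objects placed along the image width with equal gaps before, between, and after them (the function name and interface are hypothetical):

```python
def uniform_positions(image_width, object_width, n):
    """Return the left x-coordinate of each of n equal-width objects so that
    the gaps before, between, and after them are all equal."""
    gap = (image_width - n * object_width) / (n + 1)
    return [int(gap * (k + 1) + object_width * k) for k in range(n)]

positions = uniform_positions(110, 10, 3)  # equal gaps of 20 -> [20, 50, 80]
```

The terminal could use such positions when the user does not trigger placement and no random layout is chosen.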
Illustratively, each target object of the at least two target objects is displayed at the target position input by the first input, i.e. the display position of each target object is triggered by the user.
Optionally, the first input may include a first sub-input (i.e., input 1) for triggering the terminal to recognize the first object, and a second sub-input (i.e., input 2) for triggering the terminal to obtain the at least two target objects.
It should be noted that the screen of the terminal provided in the embodiment of the present invention may be a touch screen, which can receive an input from the user and, in response, display content corresponding to that input. The first sub-input may be a touch screen input, a fingerprint input, a gravity input, a key input, or the like. A touch screen input is a press input, long-press input, slide input, click input, hover input (an input made by the user near the touch screen), or similar input on the touch screen of the terminal. A fingerprint input is a slide, long-press, single-click, double-click, or similar input on a fingerprint recognizer of the terminal. A gravity input is, for example, shaking the terminal in a specific direction or a specific number of times. A key input is a single-click input, double-click input, long-press input, combination-key input, or the like on a power key, volume key, Home key, or other key of the terminal. The embodiment of the present invention does not specifically limit the manner of the first sub-input; it may be any realizable manner.
For example, the first sub-input may be a user input to the first object on the first image, such as a long press input to the first object. Of course, the first sub-input may also be other possible inputs, for example, the first sub-input may also be a sliding input in which a sliding track of the user on the screen is a circle, which is not particularly limited in this embodiment of the present invention.
Optionally, after the terminal determines the first object, the terminal may display the first object according to a preset display effect. At this time, the user can know that the terminal determines the first object through the display effect of the first object.
For example, the preset display effect may include at least one of the following: highlighting an object (e.g., a first object), floating an object, adding a border to an object (e.g., a border of a predetermined thickness, color, line type, shape).
Optionally, in this embodiment of the present invention, the first object displayed according to the preset rule may be displayed at a preset position (for example, a center position of the first image) in the first image, or at a position where the first object included in the first image itself is located.
It is understood that, in the case that the first image includes a plurality of objects, the user may perform the first sub-input on each of the plurality of objects respectively to trigger the terminal to determine the plurality of first objects respectively.
Specifically, the user may trigger the terminal to take all or part of the plurality of objects in the first image as the first object.
Further, the description of the input form of the second sub-input may refer to the related description of the first sub-input in the above embodiments, and this is not described in detail in the embodiments of the present invention.
For example, the second sub-input may be a drag input, by the user, on the first object displayed according to the preset rule (e.g., the copied first object or the cut first object). Specifically, the user may drag the first object to an arbitrary display position on the first image, so that at least two target objects are generated based on the first object and displayed on the first image. After the user drags the first object to a display position in the first image through the second sub-input, the terminal may paste and display the copied or cut first object at that display position. In this case, the display position of each of the at least two target objects on the first image is determined by a user trigger.
In this way, the terminal may either copy the first object or cut the first object to display the at least two target objects on the first panoramic image, which increases the variety of ways in which the terminal can acquire the at least two target objects.
Optionally, in this embodiment of the present invention, the user may select at least one second object from the second image through one input trigger terminal, and generate at least two target objects based on the at least one second object, that is, the at least two target objects include the at least one second object in the second image.
In application scenario 2, for each of the at least one second object, the terminal may paste and display an object copied from that second object on the first image, so as to generate and display the at least two target objects.
Specifically, after copying or cutting a second object from the second image, the terminal may first display the copied or cut second object on the first image, for example, at the center of the first panoramic image, and then display other objects derived from that second object on the first image, so that at least two target objects are displayed.
Similarly, for the detailed description that the user generates at least two target objects based on at least one second object in the second image through one input trigger terminal, reference may be made to the related description that the user generates at least two target objects based on the first object in the first image through the first input trigger terminal in the above embodiment, which is not described again in this embodiment of the present invention.
It should be noted that, with the image generation method provided in the embodiment of the present invention, a user may trigger the terminal to determine the first object according to a user requirement through the first input, and display at least two target objects generated based on the first object on the first panoramic image. Therefore, the controllability of the target image generated by the terminal subsequently is improved, and the target image meets the requirements of the user.
In a possible implementation manner, in application scenario 2, the image generation method provided in the embodiment of the present invention may further include step 201a after step 201 and before step 202:
step 201a, the terminal shoots a second image, and the background of the second image is the same as at least part of the background in the first image.
Here, the background of the second image being the same as at least part of the background of the first image means that the second image and the first image are images of the same shooting scene.
Optionally, the second image and the first image are shot by the terminal in the same shooting process.
Specifically, in a panoramic shooting process in the camera application, the images shot by the terminal in real time are the first image and the second image. Subsequently, the terminal may identify at least one second object in the second image and obtain at least two target objects according to the at least one second object. Thus, the terminal may display at least two target objects on the first image.
Optionally, in a panoramic shooting process in the camera application, after the terminal obtains the first image through shooting, the terminal may prompt the user to perform an input for triggering the terminal to shoot the second image.
For example, the terminal may display prompt information such as "please shoot a person image" or "shoot a person image?" on the current shooting preview interface to prompt the user to trigger the terminal to shoot the second image. The user's input to the shooting control (e.g., the shooting control 312) may then trigger the terminal to shoot the second image. Subsequently, the terminal may display the second image on the current shooting preview interface; the first image and the second image may be displayed on one interface simultaneously, or on different interfaces respectively.
Optionally, after the terminal obtains the first image by shooting, the terminal may directly prompt the user to decide whether to shoot the second image; alternatively, the terminal may prompt the user only after detecting that the current first image does not include any object.
Exemplarily, with reference to fig. 3 and fig. 5, fig. 6 is a schematic diagram of content displayed by a terminal according to an embodiment of the present invention. No subject is included in the panoramic image 51 displayed on the shooting preview interface shown in (a) of fig. 6. The shooting preview interface shown in (b) of fig. 6 also displays the prompt message "please shoot a person image" and a "determine" control. After the user inputs to the "determine" control and the shooting control 312 shown in (b) of fig. 6, the terminal may display the image 52 and the panoramic image 51 as shown in (a) of fig. 5. The image 52 includes an object 521, which may serve as the at least one second object. The camera application may run a photo-taking process to support the terminal in shooting the person image.
The second image may be an image including an object, and the second image is not a panoramic image. In this way, even if the object moves in the shooting scene of the second image, the object in the second image shot by the terminal will not be ghosted or distorted.
Optionally, the second image provided by the embodiment of the present invention may include one or more images, that is, one or more pictures.
It should be noted that, since the terminal may shoot the first image and the second image in the same shooting process, the first image and the second image may correspond to the same shooting scene; that is, the background of the second image is at least partially the same as the background of the first image. Therefore, the objects in the target image subsequently generated by the terminal correspond to the same shooting scene, and the panoramic images generated by the terminal have both real-time quality and diversity.
In a possible implementation manner, in the image generation method provided by the embodiment of the present invention, the step 202 may be implemented by the steps 204 and 205:
and 204, the terminal acquires an ith second subregion which is the same as the background of the ith first subregion in the first image in the second image, wherein i is a positive integer.
It is to be understood that a plurality of second sub-regions may be included in the second image, the ith second sub-region being one of the plurality of second sub-regions, each of which may contain one or more second objects.
Step 205, the terminal displays at least two target objects on the first image based on the ith first sub-region and the ith second sub-region.
Specifically, after the terminal obtains the first image and the second image by shooting, it may compare the background of the first image with the background of the second image. For each of at least one second object in the second image, the terminal can determine the background of that second object in the second image, that is, determine the second sub-region where the object and its background are located. The terminal then checks whether the first image includes a first sub-region whose background is the same as that of the second sub-region; if so, the terminal displays the corresponding second object in that first sub-region of the first image, so as to display at least two target objects.
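The background comparison of step 204 amounts to locating, in the first image, the window that best matches the second sub-region's background. A brute-force sum-of-squared-differences search (on small grayscale arrays, purely illustrative; a real implementation would use an optimized template-matching routine) looks like:

```python
import numpy as np

def match_subregion(first_bg, second_patch):
    """Find the (y, x) placement in first_bg that minimizes the sum of
    squared differences against second_patch; an SSD of 0 means the ith
    first sub-region's background equals the ith second sub-region's."""
    H, W = first_bg.shape
    h, w = second_patch.shape
    best_score, best_pos = None, None
    for yy in range(H - h + 1):
        for xx in range(W - w + 1):
            window = first_bg[yy:yy+h, xx:xx+w].astype(float)
            score = np.sum((window - second_patch.astype(float)) ** 2)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (yy, xx)
    return best_pos, best_score

# Synthetic background where every pixel value is distinct, so the match is unique.
first_bg = (np.arange(12)[:, None] * 16 + np.arange(16)[None, :]).astype(np.uint8)
second_patch = first_bg[3:7, 5:10].copy()  # stands in for the ith second sub-region's background
pos, score = match_subregion(first_bg, second_patch)
```

The returned position gives the ith first sub-region in which the corresponding second object can then be displayed.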
Optionally, in this embodiment of the present invention, step 205 may be implemented by step 205a:
Step 205a, the terminal replaces the image content of the ith first sub-region with the image content of the ith second sub-region, where the ith second sub-region includes an ith second object.
Specifically, the terminal replaces the image content of the ith first sub-region with the second object and the background contained in the ith second sub-region.
In this way, after the terminal matches the ith first sub-region and the ith second sub-region having the same background, it replaces the image content of the ith first sub-region with that of the ith second sub-region, so that at least one second object in the second image is displayed on the first image; that is, at least two target objects are displayed on the first image.
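Once the matching position is known, step 205a is a direct slice replacement; the array shapes and the top-left coordinate convention below are assumptions of this sketch.

```python
import numpy as np

def replace_subregion(first_image, top_left, second_subregion):
    """Overwrite the ith first sub-region of the first image with the full
    image content (object plus background) of the ith second sub-region."""
    y, x = top_left
    h, w = second_subregion.shape[:2]
    out = first_image.copy()
    out[y:y+h, x:x+w] = second_subregion
    return out

first_image = np.zeros((5, 5), dtype=np.uint8)
second_sub = np.ones((2, 3), dtype=np.uint8)
target = replace_subregion(first_image, (1, 2), second_sub)
```

Because the two sub-regions share the same background, the seam of the replacement falls on matching background pixels, which is what makes the whole-region swap viable.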
Optionally, in this embodiment of the present invention, step 205 may be implemented by steps 205b and 205c:
step 205b, the terminal extracts the ith object image of the ith second object in the ith second sub-region.
Wherein the ith second object is an object in at least one second object in the second image.
Specifically, the terminal may determine the ith second sub-region and the ith first sub-region having the same background, and determine the region occupied by the ith second object within the ith second sub-region by using techniques such as image segmentation.
Step 205c, the terminal displays the ith object image in a target area within the ith first sub-region, where the target area is the area in the ith first sub-region corresponding to the display area of the ith second object in the second image.
Specifically, after the terminal determines the area of the ith second object within the ith second sub-region, it determines the target area for the ith object image within the ith first sub-region; that is, the area in the ith first sub-region corresponding to the display area of the ith second object in the second image is determined as the target area.
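Steps 205b and 205c can be sketched by differencing the second sub-region against its object-free background to obtain an object mask (a crude stand-in for the real image-segmentation techniques the patent mentions), then writing only the masked pixels into the target area; all names and the threshold are illustrative assumptions.

```python
import numpy as np

def extract_object_mask(second_sub, second_bg, thresh=30):
    """Step 205b sketch: mark pixels where the sub-region differs strongly
    from its object-free background as belonging to the ith second object."""
    diff = np.abs(second_sub.astype(int) - second_bg.astype(int))
    return diff > thresh

def paste_object_image(first_sub, second_sub, mask):
    """Step 205c sketch: draw only the object pixels into the target area,
    keeping the first image's own background elsewhere."""
    out = first_sub.copy()
    out[mask] = second_sub[mask]
    return out

second_bg = np.full((4, 4), 50, dtype=np.uint8)   # object-free background
second_sub = second_bg.copy()
second_sub[1:3, 1:3] = 200                        # the ith second object
first_sub = np.full((4, 4), 80, dtype=np.uint8)   # matching first sub-region
mask = extract_object_mask(second_sub, second_bg)
result = paste_object_image(first_sub, second_sub, mask)
```

Copying only the object pixels, rather than the whole sub-region, is what keeps the synthesized area small and the compositing trace minimal, as the following paragraph notes.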
In this way, the terminal generates the target image by combining the relatively small ith object image with the first image. This reduces visible traces of image synthesis in the target image and improves the display effect of the at least two target objects on the first image, that is, the quality of the panoramic image generated by the terminal.
It should be noted that, in the image generation method provided in the embodiment of the present invention, the terminal may determine the display positions of the at least two target objects in the first image by comparing the background of the first image with the background of the second image. Therefore, the display effect of the at least two target objects on the first image is improved, that is, the quality of the subsequently generated panoramic image (the target image) is improved.
In a possible implementation manner, in the image generation method provided by the embodiment of the present invention, the terminal includes a rotatable camera. Specifically, step 201 may be implemented by step 206, and before step 202 the method may further include step 207:
and step 206, the terminal controls the camera to rotate by a first angle according to the first direction, and shooting is carried out in the rotating process to obtain a first image.
Optionally, the rotatable camera may be a universal camera, i.e. the camera may be rotated in any direction.
In addition, alternatively, the rotatable camera may support a rotation angle in one direction (e.g., the first direction) of 0 to 360 degrees.
For example, the first direction is the direction in which the camera rotates clockwise along a horizontal arc, and the first angle may be an angle between 0 and 180 degrees.
Optionally, when the terminal captures an image through the rotatable camera, the rotation angle of the camera may be recorded, so that each image captured by the terminal corresponds to one rotation angle (that is, the camera captures the image at the rotation angle).
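The per-frame angle bookkeeping can be modeled as a small record of (angle, frame) pairs; the class and method names below are hypothetical, not from the patent.

```python
class PanoramaCapture:
    """Record the rotation angle at which each frame is captured during a
    sweep, so every shot image maps back to one camera rotation angle."""

    def __init__(self):
        self.frames = []  # list of (angle_in_degrees, frame) pairs

    def capture(self, angle, frame):
        self.frames.append((float(angle), frame))

    def frame_at(self, angle, tol=1.0):
        """Return the first frame captured within `tol` degrees of `angle`."""
        for a, f in self.frames:
            if abs(a - angle) <= tol:
                return f
        return None

cap = PanoramaCapture()
for a in range(0, 181, 30):            # sweep through a 180-degree first angle
    cap.capture(a, f"frame@{a}")
```

Keeping this mapping lets the terminal later relate a second-image frame (shot over the smaller second angle) back to the portion of the first sweep that shares its background.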
Step 207, the terminal controls the camera to rotate by a second angle in the first direction, and shoots during the rotation to obtain the second image, where the second angle is less than or equal to the first angle.
Illustratively, the second angle may be an angle between 150 and 170 degrees.
In this way, it can be ensured that the background of the first image and the background of the second image share the same area. Thus, the terminal may subsequently display both the at least one second object in the second image and its background on the first image, so as to display the at least two target objects on the first image.
Fig. 7 is a schematic structural diagram of a possible terminal according to an embodiment of the present invention. The terminal 70 shown in fig. 7 includes: a shooting module 701, a display module 702 and a generation module 703; a shooting module 701, configured to shoot a first image; a display module 702, configured to display at least two target objects on the first image obtained by the shooting module 701; a generating module 703, configured to generate a target image based on the first image and the at least two target objects displayed by the displaying module 702; wherein the at least two target objects are generated based on the first object in the first image or the at least two target objects comprise at least one second object in the second image.
Optionally, the at least two target objects are generated based on a first object in the first image; a display module 702, specifically configured to receive a first input of a first object in a first image from a user; in response to a first input, the first object in the first image is copied at least once, at least two target objects are obtained, and each target object is displayed to a target position input by the first input.
Optionally, the at least two target objects comprise at least one second object in the second image; the shooting module 701 is further configured to shoot a second image before the display module 702 displays at least two target objects on the first image, where a background of the second image is the same as at least a part of a background in the first image.
Optionally, the display module 702 is specifically configured to acquire an ith second sub-region, which is the same as the background of the ith first sub-region in the first image, in the second image; displaying at least two target objects on the first image based on the ith first sub-area and the ith second sub-area; wherein i is a positive integer.
Optionally, the display module 702 is specifically configured to replace the image content of the ith first sub-area with the image content of the ith second sub-area, where the ith second sub-area includes the ith second object.
Optionally, the display module 702 is specifically configured to extract an ith object image of an ith second object in the ith second sub-region; and displaying the ith object image in a target area in the ith first sub-area, wherein the target area is an area corresponding to the display area of the ith second object in the second image in the ith first sub-area.
Optionally, the terminal includes a rotatable camera; the shooting module 701 is specifically configured to control the camera to rotate by a first angle according to a first direction, and shoot in the rotating process to obtain a first image; the display module 702 controls the camera to rotate by a second angle according to the first direction before displaying at least two target objects on the first image, and shoots in the rotating process to obtain a second image; wherein the second angle is less than or equal to the first angle.
The terminal 70 provided in the embodiment of the present invention can implement each process implemented by the terminal in the foregoing method embodiments, and is not described here again to avoid repetition.
The terminal provided by the embodiment of the present invention can shoot a first image; display at least two target objects on the first image, where the at least two target objects are generated based on the first object in the first image, or include at least one second object in the second image; and subsequently generate a target image based on the first image and the at least two target objects. In this way, the terminal can obtain a target image including at least two target objects from at least one second object in the second image, or from the first object in the first panoramic image, and can shoot a panoramic image including multiple objects in real time without requiring the shooting object (such as a person) to keep moving through the shooting scene corresponding to the first image. Therefore, when the user wants an object to appear multiple times in a panoramic image, the tedious process of the shooting object repeatedly moving through the shooting scene is avoided, as is the ghosting or distortion that such movement causes in the panoramic image shot by the terminal. That is, the quality of the panoramic image shot by the terminal can be improved.
Fig. 8 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present invention, where the terminal 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal configuration shown in fig. 8 is not intended to be limiting, and that the terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The input unit 104 is used for shooting a first image; a display unit 106 for displaying at least two target objects on the first image photographed by the input unit 104; a processor 110 for generating a target image from the first image and the at least two target objects of the display unit 106; wherein the at least two target objects are generated based on the first object in the first image or the at least two target objects comprise at least one second object in the second image.
The terminal provided by the embodiment of the present invention can shoot a first image; display at least two target objects on the first image, where the at least two target objects are generated based on the first object in the first image, or include at least one second object in the second image; and subsequently generate a target image based on the first image and the at least two target objects. In this way, the terminal can obtain a target image including at least two target objects from at least one second object in the second image, or from the first object in the first panoramic image, and can shoot a panoramic image including multiple objects in real time without requiring the shooting object (such as a person) to keep moving through the shooting scene corresponding to the first image. Therefore, when the user wants an object to appear multiple times in a panoramic image, the tedious process of the shooting object repeatedly moving through the shooting scene is avoided, as is the ghosting or distortion that such movement causes in the panoramic image shot by the terminal. That is, the quality of the panoramic image shot by the terminal can be improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call; specifically, it forwards downlink data received from a base station to the processor 110 for processing, and sends uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse web pages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. The audio output unit 103 may also provide audio output related to a specific function performed by the terminal 100 (e.g., a call signal reception sound or a message reception sound). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 receives sound and can process it into audio data; in phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the terminal 100 is moved close to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the terminal posture (such as switching between horizontal and vertical screens, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensors 105 may also include a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor, and the like, which are not described in detail herein.
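As a minimal illustration (not from the patent) of how a stationary accelerometer reading drives the horizontal/vertical screen switching mentioned above — the axis that carries most of gravity's ~9.8 m/s² indicates how the device is held:

```python
def screen_orientation(ax, ay, az):
    """Classify device posture from one 3-axis accelerometer sample.

    At rest the reading is dominated by gravity; whichever axis
    carries most of it tells us whether the device is upright,
    sideways, or lying flat.
    """
    mx = max(abs(ax), abs(ay), abs(az))
    if mx == abs(az):
        return "flat"        # screen facing up or down
    return "portrait" if mx == abs(ay) else "landscape"
```

A real implementation would low-pass filter successive samples before classifying, so that brief shakes do not flip the screen.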
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect touch operations performed by the user on or near it (for example, operations performed on or near the touch panel 1071 with a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends them to the processor 110; it also receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to that type. Although in Fig. 8 the touch panel 1071 and the display panel 1061 are shown as two independent components implementing the input and output functions of the terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement these functions; this is not limited herein.
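A minimal sketch (illustrative only, not from the patent) of the last step of that dispatch path — mapping the touch-point coordinates delivered by the touch controller to the on-screen element for which the processor should produce a visual output:

```python
def hit_test(x, y, widgets):
    """Return the name of the widget whose bounding box contains the
    touch point (x, y), or None if the touch landed on empty space.

    widgets: mapping of name -> (left, top, right, bottom) in pixels,
    with right/bottom exclusive.
    """
    for name, (left, top, right, bottom) in widgets.items():
        if left <= x < right and top <= y < bottom:
            return name
    return None
```

Real UI toolkits walk a view hierarchy rather than a flat dictionary, but the coordinate containment test is the same.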
The interface unit 108 is an interface for connecting an external device to the terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal 100 or may be used to transmit data between the terminal 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal (such as audio data and a phonebook). Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system.
In addition, the terminal 100 includes some functional modules that are not shown, and thus, the detailed description thereof is omitted.
Preferably, an embodiment of the present invention further provides a terminal, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the above embodiment of the image generation method and achieves the same technical effect; to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. An image generation method applied to a terminal is characterized by comprising the following steps:
shooting a first image;
displaying at least two target objects on the first image;
generating a target image based on the first image and the at least two target objects;
wherein the at least two target objects are generated based on a first object in the first image or the at least two target objects comprise at least one second object in a second image;
the terminal comprises a rotatable camera;
the capturing a first image includes:
controlling the camera to rotate by a first angle in a first direction, and shooting during the rotation to obtain the first image;
before displaying at least two target objects on the first image, the method further comprises:
controlling the camera to rotate by a second angle in the first direction, and shooting during the rotation to obtain a second image;
wherein the second angle is less than or equal to the first angle.
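As an illustration (not from the patent), the two capture sweeps of claim 1 can be modeled as frame angles sampled during rotation; keeping the second angle no larger than the first guarantees that the second image covers a sub-range of the first image's background. The 180°/120°/30° values below are arbitrary assumptions:

```python
def sweep_frames(total_deg, step_deg):
    """Angles (in degrees) at which frames are grabbed while the
    camera rotates through total_deg; stitching frames taken at
    these angles yields the captured image."""
    return list(range(0, total_deg + 1, step_deg))

first_sweep = sweep_frames(180, 30)   # first image: first angle = 180
second_sweep = sweep_frames(120, 30)  # second image: second angle <= first
```

Because both sweeps start at 0° and run in the same direction, every frame of the shorter sweep revisits scenery already present in the first image, which is what makes the background matching in the later claims possible.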
2. The method of claim 1, wherein the at least two target objects are generated based on a first object in the first image;
the displaying, on the first image, at least two target objects includes:
receiving a first input of a user to a first object in the first image;
in response to the first input, copying the first object in the first image at least once to obtain at least two target objects, and displaying each target object at a target position indicated by the first input.
3. The method of claim 1, wherein the at least two target objects comprise at least one second object in a second image;
before displaying at least two target objects on the first image, the method further comprises:
shooting a second image, wherein the background of the second image is the same as at least part of the background of the first image.
4. The method of claim 3, wherein displaying at least two target objects on the first image comprises:
acquiring, in the second image, an ith second sub-region whose background is the same as that of an ith first sub-region in the first image;
displaying at least two target objects on the first image based on the ith first sub-area and the ith second sub-area;
wherein i is a positive integer.
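The patent does not prescribe an algorithm for the background matching in claim 4; one plausible realization (illustrative only) is a sliding-window sum-of-squared-differences search that finds where the first image's sub-region reappears in the second image:

```python
import numpy as np

def find_matching_offset(second_img, patch):
    """Slide `patch` (the i-th first sub-region, grayscale) across
    `second_img` and return the column offset with the smallest sum
    of squared differences, i.e. where the backgrounds coincide."""
    h, w = patch.shape[:2]
    scores = []
    for c in range(second_img.shape[1] - w + 1):
        window = second_img[:h, c:c + w].astype(float)
        scores.append(((window - patch) ** 2).sum())
    return int(np.argmin(scores))
```

Production code would use a normalized score (e.g. normalized cross-correlation) to tolerate exposure changes between the two sweeps; plain SSD suffices for the sketch.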
5. The method of claim 4, wherein displaying at least two target objects on the first image based on the ith first sub-region and the ith second sub-region comprises:
replacing the image content of the ith first sub-region with the image content of the ith second sub-region, wherein the ith second sub-region comprises an ith second object.
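Claim 5's strategy, whole-region replacement, amounts to a rectangular copy between aligned images. A minimal sketch (illustrative, not from the patent; the box representation is an assumption):

```python
import numpy as np

def replace_subregion(first_img, second_img, box):
    """Swap in the whole i-th second sub-region (which contains the
    i-th second object) for the matching sub-region of the first
    image. box is (row0, col0, row1, col1), end-exclusive."""
    r0, c0, r1, c1 = box
    out = first_img.copy()
    out[r0:r1, c0:c1] = second_img[r0:r1, c0:c1]
    return out
```

This works because the two sub-regions share the same background: pixels outside the object are identical, so replacing the whole rectangle only visibly adds the object.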
6. The method of claim 4, wherein displaying at least two target objects on the first image based on the ith first sub-region and the ith second sub-region comprises:
extracting an ith object image of an ith second object of the ith second sub-region;
and displaying the ith object image in a target area in the ith first sub-area, wherein the target area is an area corresponding to the display area of the ith second object in the second image in the ith first sub-area.
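Claim 6 instead extracts only the object image before overlaying it. The patent does not specify the extraction method; one plausible approach (illustrative only, threshold value assumed), given that the two sub-regions share the same background, is per-pixel differencing:

```python
import numpy as np

def extract_and_overlay(first_sub, second_sub, thresh=30):
    """Extract the i-th second object by differencing the two
    grayscale sub-regions (their backgrounds match, so large
    differences mark the object), then overlay only those pixels
    onto the first sub-region."""
    diff = np.abs(second_sub.astype(int) - first_sub.astype(int))
    mask = diff > thresh
    out = first_sub.copy()
    out[mask] = second_sub[mask]
    return out, mask
```

Compared with claim 5's whole-rectangle replacement, this touches only the object's own pixels, which avoids visible seams when the two exposures differ slightly in the background area.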
7. A terminal, characterized in that the terminal comprises: the device comprises a shooting module, a display module and a generation module;
the shooting module is used for shooting a first image;
the display module is used for displaying at least two target objects on the first image obtained by the shooting module;
the generating module is used for generating a target image based on the first image and the at least two target objects displayed by the display module;
wherein the at least two target objects are generated based on a first object in the first image or the at least two target objects comprise at least one second object in a second image;
the terminal comprises a rotatable camera;
the shooting module is specifically configured to control the camera to rotate by a first angle in a first direction and shoot during the rotation to obtain the first image, and, before the display module displays at least two target objects on the first image, to control the camera to rotate by a second angle in the first direction and shoot during the rotation to obtain a second image;
wherein the second angle is less than or equal to the first angle.
8. A terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image generation method according to any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image generation method according to any one of claims 1 to 6.
CN201910472529.6A 2019-05-31 2019-05-31 Image generation method and terminal Active CN110233966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910472529.6A CN110233966B (en) 2019-05-31 2019-05-31 Image generation method and terminal

Publications (2)

Publication Number Publication Date
CN110233966A CN110233966A (en) 2019-09-13
CN110233966B true CN110233966B (en) 2021-06-15

Family

ID=67858330

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115988322A * 2022-11-29 2023-04-18 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and device for generating panoramic image, electronic equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
KR102076629B1 * 2013-06-18 2020-02-12 Samsung Electronics Co., Ltd. Method for editing images captured by portable terminal and the portable terminal therefor
CN105306862B (en) * 2015-11-17 2019-02-26 广州市英途信息技术有限公司 A kind of scene video recording system based on 3D dummy synthesis technology, method and scene real training learning method
CN108495029B (en) * 2018-03-15 2020-03-31 维沃移动通信有限公司 Photographing method and mobile terminal
CN108174109B (en) * 2018-03-15 2020-09-25 维沃移动通信有限公司 Photographing method and mobile terminal
CN108984677B (en) * 2018-06-28 2021-03-09 维沃移动通信有限公司 Image splicing method and terminal


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant