WO2020168859A1 - Photographing method and terminal device - Google Patents

Photographing method and terminal device

Info

Publication number
WO2020168859A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
input
terminal device
preview
Prior art date
Application number
PCT/CN2020/071823
Other languages
English (en)
French (fr)
Inventor
陈琳琳 (Chen Linlin)
Original Assignee
Vivo Mobile Communication Co., Ltd. (维沃移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co., Ltd. (维沃移动通信有限公司)
Publication of WO2020168859A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Definitions

  • the embodiments of the present disclosure relate to the technical field of terminals, and in particular to a photographing method and a terminal device.
  • the existing camera function has the following problems: when a photographer takes a photo of a subject, multiple shots may be needed to obtain a photo that satisfies the subject; or, when the user needs to photograph many objects, some objects may not appear in the preview image and therefore cannot be captured.
  • the photographing function of the terminal device in the related art is not flexible enough.
  • the embodiments of the present disclosure provide a photographing method and a terminal device to solve the problem that the photographing function of the terminal device is not flexible enough in related technologies.
  • embodiments of the present disclosure provide a photographing method, which is applied to a terminal device having a first screen and a second screen, and the method includes: receiving a user's first input on the first screen; in response to the first input, displaying a first preview image on the first screen and a second preview image on the second screen, the first preview image including a first object and the second preview image including a second object; and acquiring a target image according to the first preview image and the second preview image, the target image including a target object, and the target object including at least one of the first object and the second object.
  • embodiments of the present disclosure provide a terminal device, the terminal device having a first screen and a second screen, the terminal device including: a receiving module, a display module, and an acquisition module;
  • the receiving module is configured to receive a user's first input on the first screen; the display module is configured to, in response to the first input received by the receiving module, display a first preview image on the first screen and a second preview image on the second screen, the first preview image including a first object and the second preview image including a second object; the acquisition module is configured to acquire a target image according to the first preview image and the second preview image displayed by the display module, the target image including a target object, and the target object including at least one of the first object and the second object.
  • embodiments of the present disclosure provide a terminal device, including a processor, a memory, and a computer program stored on the memory and capable of running on the processor. When the computer program is executed by the processor, the steps of the photographing method in the first aspect are implemented.
  • embodiments of the present disclosure provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the photographing method in the first aspect are implemented.
  • the terminal device may receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and a second preview image on the second screen; and acquire a target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device uses both the first screen and the second screen to take pictures. Compared with single-screen photography, the user can obtain a satisfactory target image more quickly, which can solve the problem that the camera function of the terminal device in the related art is not flexible enough.
  • FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure
  • FIG. 3 is the second flowchart of the photographing method provided by an embodiment of the disclosure.
  • FIG. 5 is one of the schematic diagrams of the interface of the photographing method provided by the embodiments of the disclosure.
  • FIG. 6 is the fourth flowchart of the photographing method provided by an embodiment of the disclosure.
  • FIG. 7 is the fifth flowchart of the photographing method provided by an embodiment of the disclosure.
  • FIG. 10 is a schematic structural diagram of a terminal device provided by an embodiment of the disclosure.
  • FIG. 11 is a schematic diagram of hardware of a terminal device provided by an embodiment of the disclosure.
  • the terms “first”, “second”, “third”, and “fourth” in the specification and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of objects.
  • the first input, the second input, the third input, and the fourth input are used to distinguish different inputs, rather than to describe a specific order of inputs.
  • words such as “exemplary” or “for example” are used as examples, instances, or illustrations. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present disclosure should not be construed as being more preferable or advantageous than other embodiments or designs. Rather, words such as “exemplary” or “for example” are used to present related concepts in a concrete manner.
  • “multiple” refers to two or more; for example, multiple processing units refers to two or more processing units, and multiple elements refers to two or more elements.
  • the terminal device can receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and a second preview image on the second screen, the first preview image including a first object and the second preview image including a second object; and, according to the first preview image and the second preview image, acquire a target image, the target image including a target object, and the target object including at least one of the first object and the second object.
  • the terminal device can acquire the target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device uses both the first screen and the second screen to take pictures. Compared with single-screen photography, the user can obtain a satisfactory target image more quickly, which can solve the problem that the camera function of the terminal device in the related art is not flexible enough.
  • the following takes the Android operating system as an example to introduce the software environment applied by the photographing method provided in the embodiments of the present disclosure.
  • as shown in FIG. 1, it is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure.
  • the architecture of the Android operating system includes 4 layers, namely: application layer, application framework layer, system runtime library layer, and kernel layer (specifically, it may be the Linux kernel layer).
  • the application layer includes various applications (including system applications and third-party applications) in the Android operating system.
  • the application framework layer is the framework of the application. Developers can develop some applications based on the application framework layer while complying with the development principles of the application framework.
  • the system runtime layer includes a library (also called a system library) and an Android operating system runtime environment.
  • the library mainly provides various resources needed by the Android operating system.
  • the Android operating system operating environment is used to provide a software environment for the Android operating system.
  • the kernel layer is the operating system layer of the Android operating system and belongs to the lowest level of the Android operating system software level.
  • the kernel layer is based on the Linux kernel to provide core system services and hardware-related drivers for the Android operating system.
  • developers can develop a software program implementing the photographing method provided by the embodiments of the present disclosure based on the system architecture of the Android operating system shown in FIG. 1, so that the photographing method can run on the Android operating system shown in FIG. 1. That is, the processor or the terminal device can implement the photographing method provided in the embodiments of the present disclosure by running the software program in the Android operating system.
  • the terminal device in the embodiment of the present disclosure may be a mobile terminal device or a non-mobile terminal device.
  • the mobile terminal device can be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted terminal, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), etc.
  • the non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, or a self-service machine, etc.; the embodiment of the present disclosure does not specifically limit it.
  • the terminal device may be a multi-screen terminal device, such as a double-sided screen terminal device, a folding screen terminal device, etc., which is not limited in the embodiment of the present disclosure.
  • the execution subject of the photographing method provided by the embodiments of the present disclosure may be the aforementioned terminal device (including mobile terminal devices and non-mobile terminal devices), or may be a functional module and/or functional entity in the terminal device that can implement the method, which can be determined according to actual usage requirements; the embodiments of the present disclosure are not limited thereto.
  • the following takes a terminal device as an example to illustrate the photographing method provided in the embodiment of the present disclosure.
  • an embodiment of the present disclosure provides a photographing method, which is applied to a terminal device having a first screen and a second screen.
  • the method may include the following steps 201 to 203.
  • Step 201 The terminal device receives a user's first input on the first screen.
  • the first input may be a click input of the user on the first screen, or a sliding input of the user on the first screen, or other feasible input, which is not limited in the embodiment of the present disclosure.
  • the above-mentioned click input may be any number of click operations, for example, a single-click operation or a double-click operation; it may also be a click operation of any duration, for example, a short-press operation whose duration is less than or equal to a preset duration, or a long-press operation whose duration is greater than the preset duration, etc.
  • the aforementioned sliding operation may be a sliding operation in any direction, such as a leftward sliding operation, a rightward sliding operation, an upward sliding operation, or a downward sliding operation.
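The input classification above (short press vs. long press by duration, swipe by direction) can be sketched in a few lines. The patent gives no code; the following Python is a hypothetical illustration, and the threshold value and all names are assumptions, not from the patent.

```python
# Hypothetical sketch of the input classification described above: a press
# whose duration is at most a preset threshold counts as a short press,
# a longer one as a long press; a swipe is classified by its dominant axis.

PRESET_DURATION_S = 0.5  # assumed threshold; the patent leaves it unspecified


def classify_press(duration_s: float) -> str:
    """Classify a press input by its duration."""
    if duration_s <= PRESET_DURATION_S:
        return "short_press"
    return "long_press"


def classify_swipe(dx: float, dy: float) -> str:
    """Classify a swipe by its dominant direction (screen coordinates,
    with y growing downward as on most touch frameworks)."""
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"
```

For example, `classify_press(0.2)` yields a short press under the assumed 0.5 s threshold, while a swipe with a large horizontal component classifies as left or right.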
  • Step 202 In response to the first input, the terminal device displays a first preview image on the first screen, and displays a second preview image on the second screen.
  • the first preview image includes a first object
  • the second preview image includes a second object
  • the first object and the second object may be the same or different, which is not limited in the embodiment of the present disclosure.
  • the first object and the second object are the objects that the user needs to photograph, and may each be at least one of the following: a person, an animal, or an object.
  • the first preview image and the second preview image generally refer to live preview images; that is, the first preview image may change with the movement of the first object or the movement of the terminal device, and the second preview image may change with the movement of the second object or the movement of the terminal device.
  • the first preview image and the second preview image may be the same or different, which is not limited in the embodiment of the present disclosure.
  • when the first object and the second object are the same, the first preview image and the second preview image at the same moment are the same, while those at different moments may be the same or different; when the first object and the second object are different, the first preview image and the second preview image are different.
  • the first input is an operation of the user clicking "dual-screen camera mode" on the first screen
  • the terminal device receives the user's first input, and in response to the first input, displays the first preview image on the first screen and the second preview image on the second screen.
  • Step 203 The terminal device acquires a target image according to the first preview image and the second preview image.
  • the target image includes a target object, and the target object includes at least one of the first object and the second object.
  • the terminal device takes a picture according to the first preview image and the second preview image to obtain a target image
  • the target image is an image that the user needs to take.
  • the user can also set various photographing effects, such as portrait effects, adding expressions, pendants, etc., which are not limited in the embodiment of the present disclosure.
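Steps 201 to 203 above can be sketched as a minimal flow. This is not the patent's implementation; the classes, names, and placeholder image strings below are all hypothetical stand-ins for illustration only.

```python
# A minimal sketch of steps 201-203: receiving the first input, showing a
# preview on each screen, and combining both previews into a target image.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Screen:
    name: str
    preview: Optional[str] = None  # the preview image currently displayed


@dataclass
class TerminalDevice:
    first_screen: Screen = field(default_factory=lambda: Screen("first"))
    second_screen: Screen = field(default_factory=lambda: Screen("second"))

    def on_first_input(self) -> None:
        # Step 202: in response to the first input, display both previews.
        self.first_screen.preview = "preview_with_first_object"
        self.second_screen.preview = "preview_with_second_object"

    def acquire_target_image(self) -> str:
        # Step 203: obtain the target image from the two previews
        # (represented here as simple string concatenation).
        return f"{self.first_screen.preview}+{self.second_screen.preview}"
```

A usage run would call `on_first_input()` and then `acquire_target_image()`, mirroring the order of steps 201 to 203.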
  • when a target condition is met, step 203 is executed; the target condition includes a first target condition and a second target condition.
  • this step 203 can be specifically implemented by the following steps 203a to 203c.
  • Step 203a The terminal device obtains a first image according to the first preview image.
  • for the method by which the terminal device acquires the first image according to the first preview image, reference may be made to the related art, which will not be repeated here.
  • Step 203b The terminal device obtains a second image according to the second preview image.
  • for the method by which the terminal device acquires the second image according to the second preview image, reference may be made to the related art, which will not be repeated here.
  • Step 203c The terminal device obtains a target image according to the first image and the second image.
  • step 203a can be performed first and then step 203b; or step 203b can be performed first and then step 203a; or step 203a and step 203b can be performed at the same time; the embodiment of the present disclosure is not limited thereto.
  • the second target condition and the first target condition may be the same or different.
  • the target condition includes receiving a second input from the user.
  • the second input may be voice input, touch screen input, key input, etc., which is not limited in the embodiment of the present disclosure.
  • the first target condition and the second target condition are both that the second input of the user is received.
  • the above steps 203a to 203c are: when the terminal device receives the user's second input, the terminal device obtains the first image according to the first preview image and the second image according to the second preview image, and then obtains the target image based on the first image and the second image.
  • the second input includes a first sub-input and a second sub-input, where the first sub-input is the user's input on the first screen and the second sub-input is the user's input on the second screen; the first target condition is that the first sub-input of the user is received, and the second target condition is that the second sub-input of the user is received.
  • the above steps 203a to 203c are: when the terminal device receives the user's first sub-input, the terminal device obtains the first image according to the first preview image; when the terminal device receives the user's second sub-input, it obtains the second image according to the second preview image; and then the target image is obtained according to the first image and the second image.
  • the terminal device obtains the first image and the second image under the trigger of the user's second input, and obtains the target image according to the first image and the second image. In this way, the target image can be obtained according to the user's needs, so the obtained target image can better meet those needs and improve the user experience.
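The sub-input-triggered variant of steps 203a to 203c can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the class and method names are hypothetical.

```python
# Sketch of steps 203a-203c under sub-input triggers: each sub-input
# freezes the corresponding preview into a still image, and the target
# image is produced only once both captures exist.

from typing import Optional


class DualCapture:
    def __init__(self) -> None:
        self.first_image: Optional[str] = None
        self.second_image: Optional[str] = None

    def on_first_sub_input(self, first_preview: str) -> None:
        self.first_image = first_preview       # step 203a

    def on_second_sub_input(self, second_preview: str) -> None:
        self.second_image = second_preview     # step 203b

    def target_image(self) -> Optional[str]:
        """Step 203c: combine once both captures are available."""
        if self.first_image is None or self.second_image is None:
            return None
        return f"{self.first_image}|{self.second_image}"
```

Note that the two sub-inputs may arrive in either order, matching the statement that steps 203a and 203b can be performed in any order.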
  • the target condition includes: detecting that the posture information of the first object has not changed within a first preset time period, and detecting that the posture information of the second object has not changed within a second preset time period; wherein, when it is detected that the posture information of the first object has not changed within the first preset time period, the first image is acquired according to the first preview image, and when it is detected that the posture information of the second object has not changed within the second preset time period, the second image is acquired according to the second preview image.
  • the first preset duration and the second preset duration may be preset, and they may be the same or different, which is not limited in the embodiment of the present disclosure.
  • the posture information of the first object includes at least position information and morphological information (including posture, action, expression, etc.) of each part of the first object, and may also include other information, which is not limited in the embodiment of the present disclosure.
  • the posture information of the second object includes at least position information and shape information of each part of the second object, and may also include other information, which is not limited in the embodiment of the present disclosure.
  • the first target condition is detecting that the posture information of the first object has not changed within the first preset time period
  • the second target condition is detecting that the posture information of the second object has not changed within the second preset time period.
  • the target condition includes detecting that the posture information of the first object has not changed within the first preset time period and detecting that the posture information of the second object has not changed within the second preset time period. In this way, the first image and the second image can be obtained without user operation, and the target image can be obtained from them; this makes it possible, in cases where operating the terminal device is inconvenient (the user's operation of the device might change the posture and degrade the photographing effect), to obtain a target image that meets the user's needs, improving the user experience.
  • the first target condition may be receiving a first sub-input of the user
  • the second target condition may be detecting that the posture information of the second object has not changed within a second preset time period.
  • the first target condition may be detecting that the posture information of the first object has not changed within the first preset time period
  • the second target condition may be receiving a second sub-input of the user.
  • when the terminal device receives the first sub-input of the user, it acquires the first image according to the first preview image; and when the terminal device detects that the posture information of the second object has not changed within the second preset time period, it acquires the second image according to the second preview image.
  • the user can set the first target condition and the second target condition according to requirements to obtain a relatively satisfactory photographing effect, which can improve user experience.
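The "posture unchanged within a preset time period" condition can be sketched as a stability check over a sliding window of pose samples. The patent does not specify how posture is compared; the sampling format, window, and tolerance below are assumptions for illustration only.

```python
# Hedged sketch of the posture-stability target condition: pose samples
# (e.g. flattened joint coordinates) are compared over a time window, and
# the condition holds when every sample in the window stays within a
# tolerance of the latest pose AND the window actually spans the preset
# duration. Values are assumptions, not from the patent.

def posture_stable(samples, window_s=2.0, tolerance=0.05):
    """samples: list of (timestamp_s, pose_vector) in time order."""
    if not samples:
        return False
    latest_t, latest_pose = samples[-1]
    # Keep only samples inside the preset time window.
    window = [(t, p) for t, p in samples if latest_t - t <= window_s]
    # Require the retained samples to span the whole preset duration.
    if latest_t - window[0][0] < window_s:
        return False
    for _, pose in window:
        drift = max(abs(a - b) for a, b in zip(pose, latest_pose))
        if drift > tolerance:
            return False
    return True
```

A practical implementation would feed this with pose estimates from the camera pipeline; here the point is only the window-and-tolerance structure of the condition.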
  • the target object includes the first object or the second object; that is, the first object and the second object are the same, but the first image obtained according to the first preview image and the second image obtained according to the second preview image may be the same (the posture of the first object in the first image is the same as the posture of the second object in the second image) or different (the posture of the first object in the first image differs from the posture of the second object in the second image);
  • this step 203c can be specifically implemented by the following steps 203c1-step 203c3.
  • Step 203c1 The terminal device displays the first image and the second image.
  • the terminal device may display the first image directly or upon being triggered by a user input; similarly, it may display the second image directly or upon being triggered by a user input. This is determined according to actual usage requirements, and the embodiment of the present disclosure does not limit it.
  • Step 203c2 The terminal device receives the third input of the user.
  • the third input is an input for the user to select a target image from the first image and the second image.
  • the user can select, from the first image and the second image, the image whose photographing effect satisfies them as the target image according to their own preferences, which can improve the user experience.
  • Step 203c3 In response to the third input, the terminal device obtains the target image corresponding to the third input.
  • the screen marked "1" is the first screen
  • the screen marked "2" is The screen
  • the first preview image and the second preview image are images acquired by the camera of the second screen (facing the second user).
  • the first preview image is displayed on the first screen
  • the second preview image is displayed on the second screen. Preview image.
  • the first preview image and the second preview image are the same, so that the first user can see the first preview image (and infer the effect of the captured image from it), while the second user can see the second preview image through the second screen and adjust their posture according to it (to obtain a photographing effect that meets their requirements); a target image that better meets the users' requirements is then obtained from the first image and the second image.
  • the target object includes the first object and the second object; that is, the first object and the second object may be the same (with different postures, so that target images with different postures can be generated) or different (so that target images including different objects can be generated). Then, in conjunction with FIG. 3, as shown in FIG. 6, step 203c can be specifically implemented by the following step 203c4.
  • Step 203c4 The terminal device splices the first image and the second image to generate the target image.
  • for the specific image stitching technology, reference may be made to the existing related art, which will not be repeated here. It should be noted that if the first image and the second image include overlapping (identical) areas, de-duplication processing can be performed during stitching (that is, the repeated areas are removed so that no repeated areas remain in the target image); for the specific de-duplication technology, reference may be made to the existing related art, which will not be repeated here.
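The splice-with-de-duplication idea can be illustrated on a drastically simplified model, treating each capture as a 1-D sequence of pixels. Real image stitching (feature matching, warping, blending) is far more involved; this hypothetical sketch shows only the "remove the repeated region while joining" step.

```python
# Illustrative sketch of splicing two captures while removing the repeated
# (overlapping) region, modeled on 1-D sequences for simplicity.

def stitch(first, second):
    """Concatenate two sequences, dropping the longest overlap where the
    end of `first` repeats the start of `second`."""
    max_k = min(len(first), len(second))
    # Try the largest candidate overlap first, shrinking until a match.
    for k in range(max_k, 0, -1):
        if first[len(first) - k:] == second[:k]:
            return first + second[k:]
    # No overlap: plain concatenation.
    return first + second
```

For instance, stitching `[1, 2, 3, 4]` with `[3, 4, 5]` detects the repeated `[3, 4]` region and emits `[1, 2, 3, 4, 5]`, so no area appears twice in the result.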
  • the first screen or the second screen of the terminal device may not be able to display all areas of the target image obtained by splicing at the same time.
  • the user can trigger the terminal device to display different areas of the target image obtained by splicing by sliding input, etc., so that the user can view the target image.
  • in this way, when taking pictures, the user can obtain a first image and a second image with the camera effect they require, and obtain from them a target image with the display effect they require, which increases the speed at which the user obtains the desired target image.
  • this step 203c4 may be specifically implemented by the following step 203c4a.
  • Step 203c4a: The terminal device stitches the first side of the first image with the second side of the second image, and stitches the second side of the first image with the first side of the second image, to generate the target image.
  • the first side of the first image and the second side of the first image are two opposite sides of the first image, and the first side of the second image and the second side of the second image are two opposite sides of the second image.
  • the terminal device stereoscopically stitches the first image and the second image to obtain a target image with multiple objects and multiple backgrounds.
  • the target image obtained after de-duplication processing can show a wider scene and give the target image a certain three-dimensional effect, an effect that cannot be achieved by single-screen photography in the related art.
  • the background is a scene other than the subject that serves as a foil to the subject.
  • the scene in the first image other than the first object is its background, and the scene in the second image other than the second object is its background.
  • the target object includes the first object and the second object; that is, the first object and the second object may be the same (with different postures, so that target images with different postures can be generated) or different (so that target images including different objects can be generated). Then, in conjunction with FIG. 3, as shown in FIG. 7, step 203c can be specifically implemented by the following step 203c5.
  • Step 203c5 The terminal device synthesizes the first target image and the first target object to generate the target image.
  • the first target image is the first image, and the first target object is the second object; or, the first target image is the second image, and the first target object is the first object.
  • the terminal device synthesizes the first object with the second image, specifically: selecting the background of the second image as the background of the target image and compositing the first object and the second object onto that background; or the terminal device synthesizes the second object with the first image, specifically: selecting the background of the first image as the background of the target image and compositing the first object and the second object onto that background.
  • the specific synthesis process can refer to the existing related technology, which will not be repeated here.
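The synthesis described above (keep one image's background, composite both objects onto it) can be sketched with a toy image model. Real compositing would use segmentation masks and alpha blending; here images are modeled as dicts of named regions, and all names are hypothetical.

```python
# Simplified sketch of step 203c5: the background of one image is kept as
# the target background, and both objects are composited onto it.

def composite(first_image, second_image, keep_background="second"):
    """Each image is a dict: {"background": ..., "objects": [...]}.

    keep_background selects which image's background becomes the target
    background; the objects of both images are placed onto it.
    """
    base = second_image if keep_background == "second" else first_image
    return {
        "background": base["background"],
        "objects": first_image["objects"] + second_image["objects"],
    }
```

For example, compositing an image of the first object on a beach with an image of the second object in a park, keeping the second background, yields both objects on the park background.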
  • the first user takes a photo of the second user through the camera on the second screen
  • the second user takes a photo of the first user through the camera on the first screen
  • the first user takes a picture of himself through the camera on the first screen
  • the second user takes a picture of the second user himself through the camera on the second screen.
  • the first object is different from the second object, and then the target image shown in FIG. 9 is obtained through synthesis.
  • the dual-shooting mode is used to obtain a target image that meets the user's needs, and it increases the speed of obtaining such a target image.
  • this step 203c5 may be specifically implemented through the following steps 203c5a-203c5d.
  • Step 203c5a The terminal device synthesizes the first target image and the first target object to generate a target preview image.
  • the second input is used to trigger the terminal device to synthesize the target preview image.
  • Step 203c5b The terminal device displays the target preview image.
  • Step 203c5c The terminal device receives the user's fourth input on the first screen.
  • Step 203c5d In response to the fourth input, the terminal device uses the target preview image as the target image.
  • before obtaining the target image, the terminal device first displays the target preview image obtained by synthesizing the first target image and the first target object. In this way, the user can check the displayed target preview image to determine whether its effect meets their requirements. If it does, the user triggers the terminal device to save it as the target image through the fourth input; if it does not, the user can change the postures of the first object and the second object and regenerate the target preview image. This reduces the probability that the user must reacquire the target image and increases the speed at which the user obtains the desired target image.
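Steps 203c5a to 203c5d form a preview-then-confirm loop, which can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not the patent's implementation.

```python
# Sketch of steps 203c5a-203c5d: the composite is first shown as a target
# preview, and only a confirming fourth input promotes it to the saved
# target image; otherwise the preview is discarded so it can be redone.

class PreviewConfirm:
    def __init__(self):
        self.target_image = None
        self.preview = None

    def synthesize_preview(self, first_target_image, first_target_object):
        # Steps 203c5a/203c5b: synthesize and display the target preview.
        self.preview = (first_target_image, first_target_object)

    def on_fourth_input(self, confirmed: bool):
        # Steps 203c5c/203c5d: confirm saves the preview as the target
        # image; otherwise the user adjusts postures and retries.
        if confirmed and self.preview is not None:
            self.target_image = self.preview
        else:
            self.preview = None
```

The discard-on-reject branch models the user changing the objects' postures and triggering a fresh target preview image.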
  • the terminal device may receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and a second preview image on the second screen, the first preview image including a first object and the second preview image including a second object; and, according to the first preview image and the second preview image, acquire a target image, the target image including a target object, and the target object including at least one of the first object and the second object.
  • the terminal device can acquire the target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device uses both the first screen and the second screen to take pictures. Compared with single-screen photography, the user can obtain a satisfactory target image more quickly, which can solve the problem that the camera function of the terminal device in the related art is not flexible enough.
  • an embodiment of the present disclosure provides a terminal device 120.
  • the terminal device 120 has a first screen and a second screen.
  • the terminal device 120 includes a receiving module 121, a display module 122, and an acquiring module 123;
  • the receiving module 121 is configured to receive a user's first input on the first screen; the display module 122 is configured to, in response to the first input received by the receiving module 121, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and the acquisition module 123 is configured to acquire a target image according to the first preview image and the second preview image displayed by the display module 122, where the target image includes a target object, and the target object includes at least one of the first object and the second object.
  • the acquisition module 123 is specifically configured to: when the target condition is met, acquire a first image according to the first preview image and acquire a second image according to the second preview image; and acquire the target image according to the first image and the second image.
  • the target condition includes receiving a second input from the user.
  • the target condition includes: detecting that the posture information of the first object has not changed within a first preset duration, and detecting that the posture information of the second object has not changed within a second preset duration; where the first image is acquired according to the first preview image when it is detected that the posture information of the first object has not changed within the first preset duration, and the second image is acquired according to the second preview image when it is detected that the posture information of the second object has not changed within the second preset duration.
  • the target object includes the first object or the second object; the acquisition module 123 is specifically configured to: display the first image and the second image; receive a third input from the user, where the third input is an input by which the user selects a target image from the first image and the second image; and in response to the third input, acquire the target image corresponding to the third input.
  • the target object includes the first object and the second object; the acquisition module 123 is specifically configured to stitch the first image and the second image to generate the target image.
  • the acquisition module 123 is specifically configured to stitch the first side of the first image to the second side of the second image and stitch the second side of the first image to the first side of the second image, to generate the target image; where the first side of the first image and the second side of the first image are two opposite sides of the first image, and the first side of the second image and the second side of the second image are two opposite sides of the second image.
  • the target object includes the first object and the second object; the acquisition module 123 is specifically configured to synthesize a first target image and a first target object to generate the target image; where the first target image is the first image and the first target object is the second object, or the first target image is the second image and the first target object is the first object.
  • the acquisition module 123 is specifically configured to: synthesize the first target image and the first target object to generate a target preview image; display the target preview image; receive a fourth input from the user on the first screen; and in response to the fourth input, use the target preview image as the target image.
  • the terminal device provided by the embodiment of the present disclosure can implement each process shown in any one of FIG. 2 to FIG. 9 in the foregoing method embodiment, and in order to avoid repetition, details are not described herein again.
  • the embodiment of the present disclosure provides a terminal device.
  • the terminal device may receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and acquire a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object.
  • the terminal device can acquire the target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device takes pictures using both the first screen and the second screen. Compared with single-screen photographing, this allows the user to obtain a satisfactory target image more quickly, which can solve the problem that the photographing function of terminal devices in the related art is not flexible enough.
  • FIG. 11 is a schematic diagram of the hardware structure of a terminal device implementing various embodiments of the present disclosure.
  • the terminal device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
  • the structure of the terminal device shown in FIG. 11 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
  • terminal devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminal devices, wearable devices, and pedometers.
  • the user input unit 107 is configured to receive a user's first input on the first screen; the display unit 106 is configured to, in response to the first input, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and the processor 110 is configured to acquire a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object.
  • the terminal device may receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and acquire a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object.
  • the terminal device can acquire the target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device takes pictures using both the first screen and the second screen. Compared with single-screen photographing, this allows the user to obtain a satisfactory target image more quickly, which can solve the problem that the photographing function of terminal devices in the related art is not flexible enough.
  • the radio frequency unit 101 can be used for receiving and sending signals in the process of sending and receiving information or during a call. Specifically, downlink data from a base station is received and then delivered to the processor 110 for processing; in addition, uplink data is sent to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through a wireless communication system.
  • the terminal device provides users with wireless broadband Internet access through the network module 102, such as helping users to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 103 can convert the audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into audio signals and output them as sounds. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 104 is used to receive audio or video signals.
  • the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042.
  • the graphics processor 1041 is configured to process image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 106.
  • the image frame processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the network module 102.
  • the microphone 1042 can receive sound, and can process such sound into audio data.
  • in the case of a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and output.
  • the terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear.
  • as one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the terminal device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer and tapping); the sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which will not be repeated here.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
  • the user input unit 107 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the terminal device.
  • the user input unit 107 includes a touch panel 1071 and other input devices 1072.
  • the touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 1071 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 110; it also receives and executes commands sent by the processor 110.
  • the touch panel 1071 can be realized by various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may also include other input devices 1072.
  • other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
  • the touch panel 1071 can be overlaid on the display panel 1061.
  • when the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of the touch event.
  • although the touch panel 1071 and the display panel 1061 are used as two independent components to implement the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to implement the input and output functions of the terminal device, which is not specifically limited here.
  • the interface unit 108 is an interface for connecting an external device with the terminal device 100.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
  • the interface unit 108 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the terminal device 100, or can be used to transfer data between the terminal device 100 and an external device.
  • the memory 109 can be used to store software programs and various data.
  • the memory 109 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function, an image playback function, etc.); the data storage area may store data (such as audio data, a phone book, etc.) created according to the use of the mobile phone.
  • the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the processor 110 is the control center of the terminal device. It uses various interfaces and lines to connect the various parts of the entire terminal device, and, by running or executing the software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, performs various functions of the terminal device and processes data, so as to monitor the terminal device as a whole.
  • the processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 110.
  • the terminal device 100 may also include a power source 111 (such as a battery) for supplying power to various components.
  • the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
  • the terminal device 100 includes some functional modules not shown, which will not be repeated here.
  • an embodiment of the present disclosure further provides a terminal device, which may include the processor 110 shown in FIG. 11, the memory 109, and a computer program stored in the memory 109 and executable on the processor 110.
  • when the computer program is executed by the processor 110, each process of the photographing method shown in any one of FIG. 2 to FIG. 9 in the foregoing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described herein again.
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, each process of the photographing method shown in any one of FIG. 2 to FIG. 9 in the foregoing method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described herein again.
  • the computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the technical solution of the present disclosure essentially, or the part that contributes to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present disclosure.

Abstract

The embodiments of the present invention disclose a photographing method and a terminal device. The method includes: receiving a user's first input on the first screen; in response to the first input, displaying a first preview image on the first screen and displaying a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and acquiring a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object.

Description

Photographing method and terminal device
This application claims priority to Chinese Patent Application No. 201910133858.8, filed with the State Intellectual Property Office on February 22, 2019 and entitled "Photographing Method and Terminal Device", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present invention relate to the field of terminal technologies, and in particular to a photographing method and a terminal device.
Background
With the continuous development of terminal technologies, the photographing function of terminal devices has become increasingly powerful.
At present, the existing photographing function has the following problems: when a photographer takes a photo of a subject, multiple shots may be needed to obtain a photo that satisfies the subject; or, when the user needs to photograph many objects, some objects may not appear in the preview image and therefore cannot be captured.
Therefore, the photographing function of terminal devices in the related art is not flexible enough.
Summary
The embodiments of the present disclosure provide a photographing method and a terminal device, to solve the problem that the photographing function of terminal devices in the related art is not flexible enough.
To solve the above technical problem, the present disclosure is implemented as follows:
In a first aspect, an embodiment of the present disclosure provides a photographing method, applied to a terminal device having a first screen and a second screen. The method includes:
receiving a user's first input on the first screen; in response to the first input, displaying a first preview image on the first screen and displaying a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and acquiring a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object.
In a second aspect, an embodiment of the present disclosure provides a terminal device. The terminal device has a first screen and a second screen, and includes a receiving module, a display module, and an acquisition module;
the receiving module is configured to receive a user's first input on the first screen; the display module is configured to, in response to the first input received by the receiving module, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and the acquisition module is configured to acquire a target image according to the first preview image and the second preview image displayed by the display module, where the target image includes a target object, and the target object includes at least one of the first object and the second object.
In a third aspect, an embodiment of the present disclosure provides a terminal device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the photographing method in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the photographing method in the first aspect.
In the embodiments of the present disclosure, the terminal device may receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and acquire a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object. In this solution, the terminal device can acquire the target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device takes pictures using both the first screen and the second screen. Compared with single-screen photographing, this allows the user to obtain a satisfactory target image more quickly, which can solve the problem that the photographing function of terminal devices in the related art is not flexible enough.
附图说明
FIG. 1 is a schematic architectural diagram of a possible Android operating system according to an embodiment of the present disclosure;
FIG. 2 is a first flowchart of a photographing method according to an embodiment of the present disclosure;
FIG. 3 is a second flowchart of a photographing method according to an embodiment of the present disclosure;
FIG. 4 is a third flowchart of a photographing method according to an embodiment of the present disclosure;
FIG. 5 is a first schematic diagram of an interface of a photographing method according to an embodiment of the present disclosure;
FIG. 6 is a fourth flowchart of a photographing method according to an embodiment of the present disclosure;
FIG. 7 is a fifth flowchart of a photographing method according to an embodiment of the present disclosure;
FIG. 8 is a second schematic diagram of an interface of a photographing method according to an embodiment of the present disclosure;
FIG. 9 is a third schematic diagram of an interface of a photographing method according to an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure;
FIG. 11 is a schematic hardware diagram of a terminal device according to an embodiment of the present disclosure.
具体实施方式
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.
The terms "first", "second", "third", and "fourth" in the specification and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of the objects. For example, the first input, the second input, the third input, and the fourth input are used to distinguish different inputs, rather than to describe a specific order of the inputs.
In the embodiments of the present disclosure, words such as "exemplarily" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present disclosure should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, the use of words such as "exemplarily" or "for example" is intended to present relevant concepts in a specific manner.
In the description of the embodiments of the present disclosure, unless otherwise stated, "multiple" means two or more; for example, multiple processing units means two or more processing units, and multiple elements means two or more elements.
The embodiments of the present disclosure provide a photographing method. In the embodiments of the present disclosure, the terminal device may receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and acquire a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object. In this solution, the terminal device can acquire the target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device takes pictures using both the first screen and the second screen. Compared with single-screen photographing, this allows the user to obtain a satisfactory target image more quickly, which can solve the problem that the photographing function of terminal devices in the related art is not flexible enough.
The following uses the Android operating system as an example to describe the software environment to which the photographing method provided by the embodiments of the present disclosure is applied.
As shown in FIG. 1, FIG. 1 is a schematic architectural diagram of a possible Android operating system according to an embodiment of the present disclosure. In FIG. 1, the architecture of the Android operating system includes four layers: an application layer, an application framework layer, a system runtime library layer, and a kernel layer (which may specifically be a Linux kernel layer).
The application layer includes various applications in the Android operating system (including system applications and third-party applications).
The application framework layer is the framework of applications. Developers can develop applications based on the application framework layer while complying with the development principles of the framework.
The system runtime library layer includes libraries (also called system libraries) and the Android operating system runtime environment. The libraries mainly provide the Android operating system with the various resources it needs. The Android operating system runtime environment provides a software environment for the Android operating system.
The kernel layer is the operating system layer of the Android operating system and is the lowest layer of the Android operating system software hierarchy. Based on the Linux kernel, the kernel layer provides core system services and hardware-related drivers for the Android operating system.
Taking the Android operating system as an example, in the embodiments of the present disclosure, developers can develop, based on the system architecture of the Android operating system shown in FIG. 1, a software program implementing the photographing method provided by the embodiments of the present disclosure, so that the photographing method can run based on the Android operating system shown in FIG. 1. That is, a processor or a terminal can implement the photographing method provided by the embodiments of the present disclosure by running the software program in the Android operating system.
The terminal device in the embodiments of the present disclosure may be a mobile terminal device or a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, or a self-service machine; this is not specifically limited in the embodiments of the present disclosure.
It should be noted that, in the embodiments of the present disclosure, the terminal device may be a multi-screen terminal device, such as a double-sided-screen terminal device or a foldable-screen terminal device, which is not limited in the embodiments of the present disclosure.
The execution subject of the photographing method provided by the embodiments of the present disclosure may be the above-mentioned terminal device (including a mobile terminal device and a non-mobile terminal device), or may be a functional module and/or functional entity in the terminal device capable of implementing the method, which may be determined according to actual use requirements and is not limited in the embodiments of the present disclosure. The following uses a terminal device as an example to exemplarily describe the photographing method provided by the embodiments of the present disclosure.
Referring to FIG. 2, an embodiment of the present disclosure provides a photographing method applied to a terminal device having a first screen and a second screen. The method may include the following steps 201 to 203.
Step 201: The terminal device receives a user's first input on the first screen.
The first input may be the user's tap input on the first screen, the user's slide input on the first screen, or another feasible input, which is not limited in the embodiments of the present disclosure.
Exemplarily, the above tap input may be a tap operation of any number of times, for example a single-tap operation or a double-tap operation; it may be a tap operation of any duration, for example a short-press operation whose duration is less than or equal to a preset duration, or a long-press operation whose duration is greater than or equal to a preset duration. The above slide operation may be a slide operation in any direction, for example a leftward slide, a rightward slide, an upward slide, or a downward slide.
Step 202: In response to the first input, the terminal device displays a first preview image on the first screen and displays a second preview image on the second screen.
The first preview image includes a first object, and the second preview image includes a second object.
The first object and the second object may be the same or different, which is not limited in the embodiments of the present disclosure. The first object and the second object are objects that the user needs to photograph, and each may be at least one of the following: a person, an animal, and an object.
The first preview image and the second preview image are preview images in the general sense; that is, the first preview image may change as the first object moves or the terminal device moves, and the second preview image may change as the second object moves or the terminal device moves.
The first preview image and the second preview image may be the same or different, which is not limited in the embodiments of the present disclosure. Exemplarily, when the first object and the second object are the same, the first preview image and the second preview image at the same moment are the same, while the first preview image and the second preview image at different moments may or may not be the same; when the first object and the second object are different, the first preview image and the second preview image are different.
Exemplarily, the first input is an operation in which the user taps "dual-screen photographing mode" on the first screen. The terminal device receives the user's first input, and in response to the first input, displays the first preview image on the first screen and displays the second preview image on the second screen.
Step 203: The terminal device acquires a target image according to the first preview image and the second preview image.
The target image includes a target object, and the target object includes at least one of the first object and the second object.
Exemplarily, the terminal device takes a photo according to the first preview image and the second preview image to obtain the target image, and the target image is the image the user needs to capture.
It should be noted that during photographing, the user may also set various photographing effects, such as portrait effects, added expressions, and pendants, which is not limited in the embodiments of the present disclosure.
Optionally, step 203 is performed when a target condition is met, and the target condition includes a first target condition and a second target condition. With reference to FIG. 2, as shown in FIG. 3, step 203 may specifically be implemented through the following steps 203a to 203c.
Step 203a: The terminal device acquires a first image according to the first preview image.
When the first target condition is met, the terminal device acquires the first image according to the first preview image. For the specific method of acquiring the first image according to the first preview image, reference may be made to the existing related art, and details are not described here.
Step 203b: The terminal device acquires a second image according to the second preview image.
When the second target condition is met, the terminal device acquires the second image according to the second preview image. For the specific method of acquiring the second image according to the second preview image, reference may be made to the existing related art, and details are not described here.
Step 203c: The terminal device acquires the target image according to the first image and the second image.
It should be noted that, in the embodiments of the present disclosure, there is no particular order between step 203a and step 203b: step 203a may be performed first and then step 203b; step 203b may be performed first and then step 203a; or step 203a and step 203b may be performed at the same time; this is not limited in the embodiments of the present disclosure.
The second target condition and the first target condition may be the same or different.
Optionally, the target condition includes receiving a second input from the user.
The second input may be a voice input, a touchscreen input, a key input, or the like, which is not limited in the embodiments of the present disclosure.
Exemplarily, if the second input is the user's input on the first screen, both the first target condition and the second target condition are receiving the user's second input. Specifically, steps 203a to 203c are as follows: when the terminal device receives the user's second input, the terminal device acquires the first image according to the first preview image and acquires the second image according to the second preview image, and then acquires the target image according to the first image and the second image.
Exemplarily, if the second input includes a first sub-input and a second sub-input, the first sub-input being the user's input on the first screen and the second sub-input being the user's input on the second screen, then the first target condition is receiving the user's first sub-input and the second target condition is receiving the user's second sub-input. Specifically, steps 203a to 203c are as follows: when the terminal device receives the user's first sub-input, the terminal device acquires the first image according to the first preview image; when the terminal device receives the user's second sub-input, the terminal device acquires the second image according to the second preview image; and then the terminal device acquires the target image according to the first image and the second image.
In this way, triggered by the user's second input, the terminal device acquires the first image and the second image and acquires the target image according to them. The target image can thus be acquired according to the user's needs, so that the obtained target image better meets the user's requirements, which can improve user experience.
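The two-sub-input flow above (freeze each screen's preview when its own sub-input arrives, then combine) can be sketched as a small state holder. This is only an illustrative sketch; the class name, the string stand-ins for images, and the deferred combination step are all assumptions, not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DualScreenCapture:
    """Sketch of steps 203a-203c with per-screen sub-input triggers:
    each screen's image is captured when its own sub-input arrives,
    and the target image can be produced only once both exist."""
    first_image: Optional[str] = None
    second_image: Optional[str] = None

    def on_first_sub_input(self, first_preview: str) -> None:
        # First target condition met: freeze the first preview into the first image.
        self.first_image = first_preview

    def on_second_sub_input(self, second_preview: str) -> None:
        # Second target condition met: freeze the second preview into the second image.
        self.second_image = second_preview

    def target_image(self) -> Optional[Tuple[str, str]]:
        # Combining (stitching or synthesis) is left abstract here; the
        # point is that the target image requires both captured images.
        if self.first_image is not None and self.second_image is not None:
            return (self.first_image, self.second_image)
        return None
```

The order of the two sub-inputs does not matter, mirroring the note that steps 203a and 203b have no fixed order.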
Optionally, the target condition includes: detecting that the posture information of the first object has not changed within a first preset duration, and detecting that the posture information of the second object has not changed within a second preset duration; where the first image is acquired according to the first preview image when it is detected that the posture information of the first object has not changed within the first preset duration, and the second image is acquired according to the second preview image when it is detected that the posture information of the second object has not changed within the second preset duration.
The first preset duration and the second preset duration may be preset in advance, and may be the same or different, which is not limited in the embodiments of the present disclosure.
The posture information of the first object includes at least position information and form information (including posture, action, expression, etc.) of each part of the first object, and may also include other information, which is not limited in the embodiments of the present disclosure. That the posture information of the first object has not changed specifically means that the first object remains still.
The posture information of the second object includes at least position information and form information of each part of the second object, and may also include other information, which is not limited in the embodiments of the present disclosure. That the posture information of the second object has not changed specifically means that the second object remains still.
In this case, the first target condition and the second target condition are different. The first target condition is detecting that the posture information of the first object has not changed within the first preset duration, and the second target condition is detecting that the posture information of the second object has not changed within the second preset duration.
In this way, when the target condition includes detecting that the posture information of the first object has not changed within the first preset duration and detecting that the posture information of the second object has not changed within the second preset duration, the first image and the second image can be obtained without user operation, and the target image can be acquired according to them. A target image meeting the user's requirements can thus be obtained even when it is inconvenient for the user to operate (operating the terminal device may cause a posture change, which may result in a poor photographing effect), which can improve user experience.
Optionally, the first target condition may be receiving the user's first sub-input, and the second target condition may be detecting that the posture information of the second object has not changed within the second preset duration. Optionally, the first target condition may be detecting that the posture information of the first object has not changed within the first preset duration, and the second target condition may be receiving the user's second sub-input. For specific descriptions, reference may be made to the above related descriptions, which are not limited in the embodiments of the present disclosure.
Exemplarily, when the terminal device receives the user's first sub-input, the terminal device acquires the first image according to the first preview image; when the terminal device detects that the posture information of the second object has not changed within the second preset duration, the terminal device acquires the second image according to the second preview image.
In this way, the user can set the first target condition and the second target condition as needed to obtain a more satisfactory photographing effect, which can improve user experience.
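One way the "posture unchanged within a preset duration" check could be realized is to compare recent posture samples against the newest one. The sketch below assumes posture information has already been reduced to timestamped keypoint lists (a hypothetical representation; the disclosure does not fix one) and simply asks whether all samples spanning the preset duration stay within a pixel tolerance.

```python
def pose_unchanged(history, preset_duration, tolerance=2.0):
    """history: list of (timestamp_seconds, keypoints) samples, newest last,
    where keypoints is a list of (x, y) positions for each tracked part.
    Returns True if the samples within `tolerance` pixels of the newest
    sample cover at least `preset_duration` seconds, i.e. the object has
    effectively remained still for that long."""
    if not history:
        return False
    t_now, latest = history[-1]
    covered = 0.0
    # Walk backwards from the newest sample until the pose differs.
    for t, kps in reversed(history):
        if any(abs(x - lx) > tolerance or abs(y - ly) > tolerance
               for (x, y), (lx, ly) in zip(kps, latest)):
            break
        covered = t_now - t
    return covered >= preset_duration
```

Applied per screen with its own duration, this yields the first and second target conditions; a real implementation would also need a keypoint detector and jitter filtering.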
Optionally, the target object includes the first object or the second object; that is, the first object and the second object are the same, but the first image obtained according to the first preview image and the second image obtained according to the second preview image may be the same (the posture of the first object in the first image is the same as the posture of the second object in the second image) or different (the posture of the first object in the first image is different from the posture of the second object in the second image). With reference to FIG. 3, as shown in FIG. 4, step 203c may specifically be implemented through the following steps 203c1 to 203c3.
Step 203c1: The terminal device displays the first image and the second image.
After acquiring the first image, the terminal device may display the first image directly, or may display it when triggered by a user input. After acquiring the second image, the terminal device may display the second image directly, or may display it when triggered by a user input. This is determined according to actual use requirements and is not limited in the embodiments of the present disclosure.
Step 203c2: The terminal device receives a third input from the user.
The third input is an input by which the user selects a target image from the first image and the second image.
For the description of the third input, reference may be made to the above related description of the first input, and details are not described here.
The user can, according to personal preference, select from the first image and the second image an image whose photographing effect satisfies the user as the target image, which can improve user experience.
Step 203c3: In response to the third input, the terminal device acquires the target image corresponding to the third input.
According to the above steps, when a first user uses the "double-sided-screen photographing mode" of the terminal device to take a photo of a second user, as shown in FIG. 5, the screen marked "1" is the first screen and the screen marked "2" is the second screen. The first preview image and the second preview image are images captured by the camera of the second screen (facing the second user); the first preview image is displayed on the first screen and the second preview image on the second screen, and at the same moment the first preview image and the second preview image are the same. In this way, not only can the first user see the first preview image (and infer from it the effect of the captured image), but the second user can also see the second preview image through the second screen and adjust his or her own posture accordingly (to obtain a photographing effect meeting the second user's requirements), and a target image better meeting the users' requirements can further be obtained according to the acquired first image and second image.
Optionally, the target object includes the first object and the second object; that is, the first object and the second object may be the same (the first object and the second object in different postures can generate a target image with different postures) or different (a target image including different objects can be generated). With reference to FIG. 3, as shown in FIG. 6, step 203c may specifically be implemented through the following step 203c4.
Step 203c4: The terminal device stitches the first image and the second image to generate the target image.
For the specific image stitching technique, reference may be made to the existing related art, and details are not described here. It should be noted that if the first image and the second image include an overlapping region (the same region), deduplication may be performed during stitching (that is, the duplicate region is removed, so that the target image contains no duplicate region); for the specific deduplication technique, reference may be made to the existing related art, and details are not described here.
The first screen or the second screen of the terminal device may not be able to display the entire stitched target image at one time; the user can trigger, through a slide input or the like, the terminal device to display different regions of the stitched target image, so that the user can view the target image conveniently.
In this way, the user can obtain, when photographing, a first image and a second image with the photographing effect the user needs, and obtain from them a target image with the display effect the user needs. Compared with the related art, this can increase the speed at which the user obtains the needed target image.
Optionally, step 203c4 may specifically be implemented through the following step 203c4a.
Step 203c4a: The terminal device stitches the first side of the first image to the second side of the second image, and stitches the second side of the first image to the first side of the second image, to generate the target image.
The first side of the first image and the second side of the first image are two opposite sides of the first image, and the first side of the second image and the second side of the second image are two opposite sides of the second image.
That is, the terminal device stitches the first image and the second image end to end in a three-dimensional manner, obtaining a target image with multiple objects and multiple backgrounds. The target image obtained by stitching the first image and the second image (with deduplication applied when they have an overlapping region) presents a broader scene and gives the target image a certain three-dimensional feel, an effect that single-screen photographing in the related art cannot achieve.
It should be noted that the background is the scenery other than the photographed subject that serves as a foil for the subject. Exemplarily, in the embodiments of the present disclosure, the scenery in the first image other than the first object is the background, and the scenery in the second image other than the second object is the background.
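The end-to-end stitch of step 203c4a can be sketched with a simple array concatenation. The sketch assumes the "first side" and "second side" are the left and right edges and that aligned images of equal height are already available; joining them along one edge produces a strip whose wrap-around seam corresponds to the second pair of stitched sides when the result is viewed as a cyclic panorama. The function name and the `overlap` deduplication parameter are illustrative, not from the disclosure.

```python
import numpy as np

def stitch_end_to_end(first, second, overlap=0):
    """first, second: H x W (or H x W x C) arrays of equal height.
    Joins the right edge of the first image to the left edge of the
    second image; treated as cyclic, the strip's wrap-around seam joins
    the second image's right edge back to the first image's left edge.
    `overlap` columns shared by both edges are dropped from the second
    image as a crude deduplication of an overlapping region."""
    if overlap:
        second = second[:, overlap:]
    return np.concatenate([first, second], axis=1)
```

A real pipeline would estimate the overlap by feature matching and blend the seams rather than hard-cutting columns.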
Optionally, the target object includes the first object and the second object; that is, the first object and the second object may be the same (the first object and the second object in different postures can generate a target image with different postures) or different (a target image including different objects can be generated). With reference to FIG. 3, as shown in FIG. 7, step 203c may specifically be implemented through the following step 203c5.
Step 203c5: The terminal device synthesizes a first target image and a first target object to generate the target image.
The first target image is the first image and the first target object is the second object; or the first target image is the second image and the first target object is the first object.
That is, the terminal device synthesizes the first object with the second image, specifically: the background of the second image is selected as the background of the target image, and the first object and the second object are synthesized into that background; or the terminal device synthesizes the second object with the first image, specifically: the background of the first image is selected as the background of the target image, and the first object and the second object are synthesized into that background. For the specific synthesis process, reference may be made to the existing related art, and details are not described here.
Exemplarily, the first user takes a photo of the second user through the camera on the second screen and the second user takes a photo of the first user through the camera on the first screen; or the first user takes a photo of himself or herself through the camera on the first screen and the second user takes a photo of himself or herself through the camera on the second screen. As shown in FIG. 8, the first object and the second object are different, and the target image shown in FIG. 9 is then obtained through synthesis.
In this way, when the subjects are numerous (for example, many people) and single-screen photographing cannot fit all the subjects into the preview image at a suitable distance (and when all the subjects do fit, they are too far away and too small to be seen clearly), or when the user needs an effect that single-screen photographing cannot achieve, a target image meeting the user's requirements can be obtained through the dual photographing mode, and the speed of obtaining such a target image is increased.
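The core of the synthesis in step 203c5, pasting an object cut out of one image into the background supplied by the other image, can be sketched as a mask-based copy. The mask-driven cut-out, the placement parameter, and the function name are illustrative assumptions; real compositing involves segmentation and matting that the disclosure leaves to the related art.

```python
import numpy as np

def composite(target_background, subject_pixels, subject_mask, top_left):
    """Paste the first target object into the first target image.
    target_background: array supplying the background of the target image.
    subject_pixels: array (same shape as subject_mask) holding the object
        cut out of the other image.
    subject_mask: boolean array, True where the object's pixels are.
    top_left: (row, col) placement of the object in the background."""
    out = target_background.copy()
    h, w = subject_mask.shape
    r, c = top_left
    region = out[r:r + h, c:c + w]       # view into the output array
    region[subject_mask] = subject_pixels[subject_mask]
    return out
```

The same helper covers both directions of step 203c5: either image may serve as `target_background` while the object from the other image is pasted in.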
Optionally, step 203c5 may specifically be implemented through the following steps 203c5a to 203c5d.
Step 203c5a: The terminal device synthesizes the first target image and the first target object to generate a target preview image.
The second input is used to trigger the terminal device to synthesize the target preview image.
Step 203c5b: The terminal device displays the target preview image.
Step 203c5c: The terminal device receives a fourth input from the user on the first screen.
Step 203c5d: In response to the fourth input, the terminal device uses the target preview image as the target image.
Before obtaining the target image, the terminal device first displays the target preview image obtained by synthesizing the first target image and the first target object. By viewing the displayed target preview image, the user can determine whether its effect meets the user's requirements. If it does, the user triggers the terminal device to save the target image through the fourth input; if it does not, the user can obtain a new target preview image by changing the postures of the first object and the second object. This can reduce the probability of the user having to reacquire the target image and can increase the speed at which the user obtains a satisfactory target image.
The drawings in the embodiments of the present disclosure are each illustrated with reference to the drawings of individual embodiments. In specific implementation, each drawing may also be implemented in combination with any other drawing that can be combined, which is not limited in the embodiments of the present disclosure.
The embodiments of the present disclosure provide a photographing method. In the embodiments of the present disclosure, the terminal device may receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and acquire a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object. In this solution, the terminal device can acquire the target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device takes pictures using both the first screen and the second screen. Compared with single-screen photographing, this allows the user to obtain a satisfactory target image more quickly, which can solve the problem that the photographing function of terminal devices in the related art is not flexible enough.
As shown in FIG. 10, an embodiment of the present disclosure provides a terminal device 120. The terminal device 120 has a first screen and a second screen, and includes a receiving module 121, a display module 122, and an acquisition module 123;
the receiving module 121 is configured to receive a user's first input on the first screen; the display module 122 is configured to, in response to the first input received by the receiving module 121, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and the acquisition module 123 is configured to acquire a target image according to the first preview image and the second preview image displayed by the display module 122, where the target image includes a target object, and the target object includes at least one of the first object and the second object.
Optionally, the acquisition module 123 is specifically configured to: when a target condition is met, acquire a first image according to the first preview image and acquire a second image according to the second preview image; and acquire the target image according to the first image and the second image.
Optionally, the target condition includes receiving a second input from the user.
Optionally, the target condition includes: detecting that the posture information of the first object has not changed within a first preset duration, and detecting that the posture information of the second object has not changed within a second preset duration; where the first image is acquired according to the first preview image when it is detected that the posture information of the first object has not changed within the first preset duration, and the second image is acquired according to the second preview image when it is detected that the posture information of the second object has not changed within the second preset duration.
Optionally, the target object includes the first object or the second object; the acquisition module 123 is specifically configured to: display the first image and the second image; receive a third input from the user, where the third input is an input by which the user selects a target image from the first image and the second image; and in response to the third input, acquire the target image corresponding to the third input.
Optionally, the target object includes the first object and the second object; the acquisition module 123 is specifically configured to stitch the first image and the second image to generate the target image.
Optionally, the acquisition module 123 is specifically configured to stitch the first side of the first image to the second side of the second image and stitch the second side of the first image to the first side of the second image, to generate the target image; where the first side of the first image and the second side of the first image are two opposite sides of the first image, and the first side of the second image and the second side of the second image are two opposite sides of the second image.
Optionally, the target object includes the first object and the second object; the acquisition module 123 is specifically configured to synthesize a first target image and a first target object to generate the target image; where the first target image is the first image and the first target object is the second object, or the first target image is the second image and the first target object is the first object.
Optionally, the acquisition module 123 is specifically configured to: synthesize the first target image and the first target object to generate a target preview image; display the target preview image; receive a fourth input from the user on the first screen; and in response to the fourth input, use the target preview image as the target image.
The terminal device provided by the embodiments of the present disclosure can implement each process shown in any one of FIG. 2 to FIG. 9 in the foregoing method embodiment; to avoid repetition, details are not described here.
The embodiments of the present disclosure provide a terminal device. In the embodiments of the present disclosure, the terminal device may receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and acquire a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object. In this solution, the terminal device can acquire the target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device takes pictures using both the first screen and the second screen. Compared with single-screen photographing, this allows the user to obtain a satisfactory target image more quickly, which can solve the problem that the photographing function of terminal devices in the related art is not flexible enough.
FIG. 11 is a schematic diagram of the hardware structure of a terminal device implementing various embodiments of the present disclosure. As shown in FIG. 11, the terminal device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components. Persons skilled in the art can understand that the terminal device structure shown in FIG. 11 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown, combine certain components, or have a different component arrangement. In the embodiments of the present disclosure, terminal devices include but are not limited to mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminal devices, wearable devices, and pedometers.
The user input unit 107 is configured to receive a user's first input on the first screen; the display unit 106 is configured to, in response to the first input, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and the processor 110 is configured to acquire a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object.
With the terminal device provided by the embodiments of the present disclosure, the terminal device may receive a user's first input on the first screen; in response to the first input, display a first preview image on the first screen and display a second preview image on the second screen, where the first preview image includes a first object and the second preview image includes a second object; and acquire a target image according to the first preview image and the second preview image, where the target image includes a target object, and the target object includes at least one of the first object and the second object. In this solution, the terminal device can acquire the target image according to the first preview image on the first screen and the second preview image on the second screen. That is, the terminal device takes pictures using both the first screen and the second screen. Compared with single-screen photographing, this allows the user to obtain a satisfactory target image more quickly, which can solve the problem that the photographing function of terminal devices in the related art is not flexible enough.
It should be understood that, in the embodiments of the present disclosure, the radio frequency unit 101 can be used for receiving and sending signals in the process of sending and receiving information or during a call. Specifically, downlink data from a base station is received and then delivered to the processor 110 for processing; in addition, uplink data is sent to the base station. Generally, the radio frequency unit 101 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides users with wireless broadband Internet access through the network module 102, for example helping users to send and receive emails, browse web pages, and access streaming media.
The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the terminal device 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is configured to receive an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames can be displayed on the display unit 106. The image frames processed by the graphics processor 1041 can be stored in the memory 109 (or another storage medium) or sent via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and can process such sound into audio data. In the case of a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the terminal device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer and tapping); the sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which will not be repeated here.
The display unit 106 is configured to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 can be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 110; it also receives and executes commands sent by the processor 110. In addition, the touch panel 1071 can be realized in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which will not be repeated here.
Further, the touch panel 1071 can be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of the touch event. Although in FIG. 11 the touch panel 1071 and the display panel 1061 are used as two independent components to implement the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to implement the input and output functions of the terminal device, which is not specifically limited here.
The interface unit 108 is an interface for connecting an external apparatus to the terminal device 100. For example, the external apparatus may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting an apparatus having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be configured to receive input (for example, data information and power) from an external apparatus and transmit the received input to one or more elements within the terminal device 100, or may be configured to transmit data between the terminal device 100 and an external apparatus.
The memory 109 may be configured to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data (such as audio data and a phone book) created according to the use of the mobile phone. In addition, the memory 109 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal device. It uses various interfaces and lines to connect all parts of the entire terminal device, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and invoking the data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) that supplies power to each component. Optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, which are not described in detail here.
Optionally, an embodiment of the present disclosure further provides a terminal device, which may include the processor 110 shown in FIG. 11, the memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When the computer program is executed by the processor 110, the processes of the photographing method shown in any one of FIG. 2 to FIG. 9 in the foregoing method embodiments are implemented, and the same technical effects can be achieved. To avoid repetition, details are not described here again.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the processes of the photographing method shown in any one of FIG. 2 to FIG. 9 in the foregoing method embodiments are implemented, and the same technical effects can be achieved. To avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the description of the foregoing implementations, a person skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present disclosure, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present disclosure.
The embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is not limited to the foregoing specific implementations. The foregoing specific implementations are merely illustrative rather than restrictive. Inspired by the present disclosure, a person of ordinary skill in the art may derive many other forms without departing from the spirit of the present disclosure and the scope protected by the claims, all of which fall within the protection of the present disclosure.

Claims (20)

  1. A photographing method, applied to a terminal device having a first screen and a second screen, the method comprising:
    receiving a first input of a user on the first screen;
    in response to the first input, displaying a first preview image on the first screen and displaying a second preview image on the second screen, wherein the first preview image includes a first object and the second preview image includes a second object;
    acquiring a target image according to the first preview image and the second preview image, wherein the target image includes a target object, and the target object includes at least one of the first object and the second object.
  2. The method according to claim 1, wherein the acquiring a target image according to the first preview image and the second preview image comprises:
    in a case that a target condition is met, acquiring a first image according to the first preview image, and acquiring a second image according to the second preview image;
    acquiring the target image according to the first image and the second image.
  3. The method according to claim 2, wherein the target condition comprises receiving a second input of the user.
  4. The method according to claim 2, wherein the target condition comprises: detecting that posture information of the first object has not changed within a first preset duration, and detecting that posture information of the second object has not changed within a second preset duration;
    wherein in a case that it is detected that the posture information of the first object has not changed within the first preset duration, the first image is acquired according to the first preview image, and in a case that it is detected that the posture information of the second object has not changed within the second preset duration, the second image is acquired according to the second preview image.
  5. The method according to any one of claims 2 to 4, wherein the target object comprises the first object or the second object;
    the acquiring the target image according to the first image and the second image comprises:
    displaying the first image and the second image;
    receiving a third input of the user, the third input being an input by which the user selects the target image from the first image and the second image;
    in response to the third input, acquiring the target image corresponding to the third input.
  6. The method according to any one of claims 2 to 4, wherein the target object comprises the first object and the second object;
    the acquiring the target image according to the first image and the second image comprises:
    splicing the first image and the second image to generate the target image.
  7. The method according to claim 6, wherein the splicing the first image and the second image to generate the target image comprises:
    splicing a first edge of the first image with a second edge of the second image, and splicing a second edge of the first image with a first edge of the second image, to generate the target image;
    wherein the first edge of the first image and the second edge of the first image are two opposite edges of the first image, and the first edge of the second image and the second edge of the second image are two opposite edges of the second image.
  8. The method according to any one of claims 2 to 4, wherein the target object comprises the first object and the second object;
    the acquiring the target image according to the first image and the second image comprises:
    synthesizing a first target image and a first target object to generate the target image;
    wherein the first target image is the first image and the first target object is the second object; or, the first target image is the second image and the first target object is the first object.
  9. The method according to claim 8, wherein the synthesizing a first target image and a first target object to generate the target image comprises:
    synthesizing the first target image and the first target object to generate a target preview image;
    displaying the target preview image;
    receiving a fourth input of the user on the first screen;
    in response to the fourth input, using the target preview image as the target image.
  10. A terminal device, the terminal device having a first screen and a second screen, and the terminal device comprising: a receiving module, a display module, and an acquiring module;
    the receiving module is configured to receive a first input of a user on the first screen;
    the display module is configured to, in response to the first input received by the receiving module, display a first preview image on the first screen and display a second preview image on the second screen, wherein the first preview image includes a first object and the second preview image includes a second object;
    the acquiring module is configured to acquire a target image according to the first preview image and the second preview image displayed by the display module, wherein the target image includes a target object, and the target object includes at least one of the first object and the second object.
  11. The terminal device according to claim 10, wherein the acquiring module is specifically configured to: in a case that a target condition is met, acquire a first image according to the first preview image and acquire a second image according to the second preview image; and acquire the target image according to the first image and the second image.
  12. The terminal device according to claim 11, wherein the target condition comprises receiving a second input of the user.
  13. The terminal device according to claim 11, wherein the target condition comprises: detecting that posture information of the first object has not changed within a first preset duration, and detecting that posture information of the second object has not changed within a second preset duration;
    wherein in a case that it is detected that the posture information of the first object has not changed within the first preset duration, the first image is acquired according to the first preview image, and in a case that it is detected that the posture information of the second object has not changed within the second preset duration, the second image is acquired according to the second preview image.
  14. The terminal device according to any one of claims 11 to 13, wherein the target object comprises the first object or the second object; and the acquiring module is specifically configured to: display the first image and the second image; receive a third input of the user, the third input being an input by which the user selects the target image from the first image and the second image; and in response to the third input, acquire the target image corresponding to the third input.
  15. The terminal device according to any one of claims 11 to 13, wherein the target object comprises the first object and the second object; and the acquiring module is specifically configured to splice the first image and the second image to generate the target image.
  16. The terminal device according to claim 15, wherein the acquiring module is specifically configured to splice a first edge of the first image with a second edge of the second image, and splice a second edge of the first image with a first edge of the second image, to generate the target image;
    wherein the first edge of the first image and the second edge of the first image are two opposite edges of the first image, and the first edge of the second image and the second edge of the second image are two opposite edges of the second image.
  17. The terminal device according to any one of claims 11 to 13, wherein the target object comprises the first object and the second object;
    the acquiring module is specifically configured to synthesize a first target image and a first target object to generate the target image;
    wherein the first target image is the first image and the first target object is the second object; or, the first target image is the second image and the first target object is the first object.
  18. The terminal device according to claim 17, wherein the acquiring module is specifically configured to: synthesize the first target image and the first target object to generate a target preview image; display the target preview image; receive a fourth input of the user on the first screen; and in response to the fourth input, use the target preview image as the target image.
  19. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein when the computer program is executed by the processor, the steps of the photographing method according to any one of claims 1 to 9 are implemented.
  20. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the photographing method according to any one of claims 1 to 9 are implemented.
PCT/CN2020/071823 2019-02-22 2020-01-13 Photographing method and terminal device WO2020168859A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910133858.8 2019-02-22
CN201910133858.8A CN109831627A (zh) 2019-02-22 2019-02-22 Photographing method and terminal device

Publications (1)

Publication Number Publication Date
WO2020168859A1 true WO2020168859A1 (zh) 2020-08-27

Family

ID=66864190

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071823 WO2020168859A1 (zh) 2019-02-22 2020-01-13 拍照方法及终端设备

Country Status (2)

Country Link
CN (1) CN109831627A (zh)
WO (1) WO2020168859A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109831627A (zh) * 2019-02-22 2019-05-31 维沃移动通信有限公司 Photographing method and terminal device
CN112383709A (zh) * 2020-11-06 2021-02-19 维沃移动通信(杭州)有限公司 Picture display method, apparatus and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090153691A1 (en) * 2007-12-17 2009-06-18 Kabushiki Kaisha Toshiba Imaging apparatus and imaging method
CN101646018A (zh) * 2008-08-08 2010-02-10 佛山普立华科技有限公司 摄影装置及其自拍方法
US20100321470A1 (en) * 2009-06-22 2010-12-23 Fujifilm Corporation Imaging apparatus and control method therefor
CN106791389A (zh) * 2016-12-16 2017-05-31 宇龙计算机通信科技(深圳)有限公司 图像处理方法、图像处理装置和终端
CN106878652A (zh) * 2017-02-27 2017-06-20 宇龙计算机通信科技(深圳)有限公司 图像显示方法及系统
CN106998428A (zh) * 2017-04-21 2017-08-01 维沃移动通信有限公司 一种移动终端的拍摄方法及移动终端
CN108965710A (zh) * 2018-07-26 2018-12-07 努比亚技术有限公司 照片拍摄方法、装置及计算机可读存储介质
CN109218614A (zh) * 2018-09-21 2019-01-15 深圳美图创新科技有限公司 一种移动终端的自动拍照方法及移动终端
CN109246360A (zh) * 2018-11-23 2019-01-18 维沃移动通信(杭州)有限公司 一种提示方法及移动终端
CN109831627A (zh) * 2019-02-22 2019-05-31 维沃移动通信有限公司 一种拍照方法及终端设备

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5121510B2 (ja) * 2008-03-04 2013-01-16 Canon Inc. Imaging system, imaging method, program, computer-readable storage medium, and image processing apparatus
CN104601884B (zh) * 2014-12-30 2017-09-29 广东欧珀移动通信有限公司 Photographing control method and terminal

Also Published As

Publication number Publication date
CN109831627A (zh) 2019-05-31

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20759861; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20759861; Country of ref document: EP; Kind code of ref document: A1)