WO2019192590A1 - Photographing method and mobile terminal
- Publication number
- WO2019192590A1 (PCT/CN2019/081459)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- sub
- preview interface
- input
- mobile terminal
- Prior art date
Classifications
- H04N5/265: Studio circuits; mixing
- H04N23/80: Camera processing pipelines; components thereof
- H04N23/45: Cameras or camera modules generating image signals from two or more image sensors of different type or operating in different modes
- H04N23/62: Control of cameras or camera modules; control of parameters via user interfaces
- H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632: GUIs for displaying or modifying preview images prior to image capturing
- H04N23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
Description
- the embodiments of the present disclosure relate to the field of communications technologies, and in particular, to a photographing method and a mobile terminal.
- The mobile terminal can be used not only to beautify images but also to perform image synthesis. For example, when a user travels and wants a good photo at a scenic spot, the mobile terminal can be used to combine a photo of the user with a photo of the scenery.
- the embodiments of the present disclosure provide a photographing method and a mobile terminal to solve the problem that the existing synthetic image generation process is cumbersome.
- an embodiment of the present disclosure provides a photographing method, including:
- receiving a first input of a user in a state where the current interface displays a shooting preview interface;
- in response to the first input, updating the shooting preview interface to display a first sub-preview interface and a second sub-preview interface;
- receiving a second input of the user;
- in response to the second input, controlling a first photographing identifier displayed in the first sub-preview interface and a second photographing identifier displayed in the second sub-preview interface to move; and
- in a case where the first photographing identifier and the second photographing identifier have an overlapping region of a preset area, controlling the front camera and the rear camera to respectively acquire a first image and a second image, and displaying a composite image of the first image and the second image;
- wherein:
- the first sub-preview interface displays a preview image acquired by the front camera
- the second sub-preview interface displays a preview image acquired by the rear camera.
- an embodiment of the present disclosure further provides a mobile terminal, including:
- a first receiving module configured to receive a first input of the user in a state that the current interface displays a shooting preview interface
- a first display module configured to, in response to the first input received by the first receiving module, update the shooting preview interface to display a first sub-preview interface and a second sub-preview interface;
- a second receiving module configured to receive a second input of the user
- a first moving module configured to, in response to the second input received by the second receiving module, control a first photographing identifier displayed in the first sub-preview interface and a second photographing identifier displayed in the second sub-preview interface to move;
- a second display module configured to, when the first photographing identifier and the second photographing identifier have an overlapping region of a preset area, control the front camera and the rear camera to respectively acquire a first image and a second image, and display a composite image of the first image and the second image;
- the first sub-preview interface displays a preview image acquired by the front camera
- the second sub-preview interface displays a preview image acquired by the rear camera.
- an embodiment of the present disclosure further provides a mobile terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the photographing method described above.
- an embodiment of the present disclosure further provides a readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the photographing method described above.
- In the embodiments of the present disclosure, a first input of the user is received in a state where the current interface displays a shooting preview interface; in response to the first input, the shooting preview interface is updated to display a first sub-preview interface and a second sub-preview interface; a second input of the user is received; in response to the second input, the first photographing identifier displayed in the first sub-preview interface and the second photographing identifier displayed in the second sub-preview interface are controlled to move; and, in a case where the first photographing identifier and the second photographing identifier have an overlapping region of a preset area, the front camera and the rear camera are controlled to respectively acquire a first image and a second image, and a composite image of the first image and the second image is displayed. The first sub-preview interface displays a preview image acquired by the front camera, and the second sub-preview interface displays a preview image acquired by the rear camera.
- In this way, the mobile terminal can control the front and rear cameras to capture images simultaneously, and combine the two images captured by the front and rear cameras into one image when the two shooting identifiers in the two sub-preview interfaces overlap by a preset area. The process of generating a composite image is thus simple to operate.
- FIG. 1 is a flowchart of a photographing method provided by an embodiment of the present disclosure
- FIG. 2 is a display interface diagram of a mobile terminal according to an embodiment of the present disclosure
- FIG. 3 is a second display interface diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 4 is a third display interface diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 5 is a fourth display interface diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 6 is a fifth display interface diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 7 is a sixth display interface diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 8 is a seventh display interface diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 9 is an eighth display interface diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 10 is a ninth display interface diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 11 is a tenth display interface diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 12 is a second flowchart of a photographing method according to an embodiment of the present disclosure.
- FIG. 13 is a third flowchart of a photographing method according to an embodiment of the present disclosure.
- FIG. 14 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 15 is a second structural diagram of a mobile terminal according to an embodiment of the present disclosure.
- FIG. 1 is a flowchart of a photographing method according to an embodiment of the present disclosure. As shown in FIG. 1 , the method includes the following steps:
- Step 101 Receive a first input of the user in a state where the current interface displays a shooting preview interface.
- the shooting preview interface may be an interface displayed when the mobile terminal is in the preview state during the shooting process. At this time, the preview image is displayed in the shooting preview interface.
- the user may perform a first input on the shooting preview interface, and the first input may specifically be an input triggered by a sliding operation performed on the display interface or folding the mobile terminal.
- Step 102 In response to the first input, update the shooting preview interface to display a first sub-preview interface and a second sub-preview interface, where the first sub-preview interface displays a preview image acquired by the front camera, and the second sub-preview interface displays a preview image acquired by the rear camera.
- the first sub-preview interface and the second sub-preview interface are respectively two independent preview interfaces, and the mobile terminal divides the original shooting preview interface into two sub-preview interfaces, and simultaneously displays two sub-preview interfaces.
- The first sub-preview interface and the second sub-preview interface may be two preview interfaces of equal or different sizes, obtained by dividing the interface horizontally or vertically; how the sub-preview interfaces are divided may be determined according to the user's input method.
- Optionally, the first input is a sliding operation of the user on the shooting preview interface. The mobile terminal acquires a sliding track of the first input and, if the sliding track meets a preset condition, updates the shooting preview interface to display a first sub-preview interface and a second sub-preview interface with the straight line of the sliding track as a dividing line.
- the user can perform a sliding operation on the shooting preview interface, and the mobile terminal can obtain a sliding track of the sliding operation, and determine whether the sliding track meets a preset condition that triggers the shooting preview interface to divide the sub-preview interface.
- the preset condition may be a condition that the mobile terminal presets and stores, and the condition may be that the sliding track is a straight line, the length of the sliding track is greater than a preset length, and the like.
- the mobile terminal acquires a straight line where the sliding track is located.
- The straight line where the sliding track is located may be determined according to the positions of points on the sliding track, and this line is the boundary between the first sub-preview interface and the second sub-preview interface.
- For example, when the mobile terminal displays the shooting preview interface, the user slides from top to bottom on the shooting preview interface.
- the mobile terminal acquires a sliding track, and determines a straight line 1 where the sliding track is located, and divides the shooting preview interface into a first sub-preview interface 11 and a second sub-preview interface 12 by using a straight line 1 as a dividing line.
- the preview image in the first sub-preview interface 11 is an image captured by the front camera
- the preview image in the second sub-preview interface 12 is an image acquired by the rear camera.
- the user can trigger the mobile terminal to enter the mode of previewing both the front and the rear cameras, and control the mobile terminal to simultaneously shoot through the front camera and the rear camera, and the user operation is convenient.
- the position of the boundary line between the first sub-preview interface and the second sub-preview interface can be determined, and the user can control the position of the shooting interface according to the size of the shooting object, and the operation mode is flexible.
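- As a rough illustration (not part of the disclosure), the following Kotlin sketch shows one way such a split could be computed: the two sub-preview rectangles are derived from a near-vertical sliding track, using the average X coordinate of the track points as the dividing line. The `Point`, `Rect`, and `splitPreviewVertically` names are hypothetical.

```kotlin
// Hypothetical sketch: split a preview area into two sub-preview rectangles
// along a vertical dividing line derived from a sliding track.

data class Point(val x: Float, val y: Float)
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float)

/**
 * Returns the two sub-preview rectangles, using the average X coordinate of
 * the track points as the vertical dividing line.
 */
fun splitPreviewVertically(preview: Rect, track: List<Point>): Pair<Rect, Rect> {
    require(track.isNotEmpty()) { "sliding track must contain points" }
    val dividerX = track.map { it.x }.average().toFloat()
    val first = Rect(preview.left, preview.top, dividerX, preview.bottom)    // e.g. front camera
    val second = Rect(dividerX, preview.top, preview.right, preview.bottom)  // e.g. rear camera
    return first to second
}
```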
- Optionally, updating the shooting preview interface to display a first sub-preview interface and a second sub-preview interface with the straight line of the sliding track as a dividing line includes: acquiring N target points on the sliding track; respectively acquiring coordinate values of each target point in a predetermined coordinate system, and calculating a variance of the coordinate values of the N target points; and, in a case where the variance is less than a preset threshold and the length of the sliding track is greater than a preset length, updating the shooting preview interface to display the first sub-preview interface and the second sub-preview interface with the straight line of the sliding track as the dividing line.
- The coordinate value of each target point is the coordinate value of the target point in the X direction or the Y direction of the predetermined coordinate system, and N is an integer greater than 1.
- the N target points may be feature points on the sliding track, for example, acquiring N target points of equal distance on the sliding track.
- the predetermined coordinate system may be a coordinate system set in advance by the mobile terminal, for example, a coordinate system formed by taking the short side direction of the mobile terminal as the X direction and the long side direction of the mobile terminal as the Y direction.
- the mobile terminal can acquire the coordinate values of the X direction or the Y direction of each target point in the coordinate system, and calculate the variance of the N coordinate values.
- For example, the mobile terminal acquires the N coordinate values of the N target points in the X direction, denoted x1dis, x2dis, ..., xndis. The average coordinate value xdis of the N coordinate values can be expressed as xdis = (x1dis + x2dis + ... + xndis) / N.
- The variance d of the N coordinate values corresponding to the N target points can then be calculated as d = [(x1dis - xdis)² + (x2dis - xdis)² + ... + (xndis - xdis)²] / N, and it is determined whether the variance d is smaller than a preset threshold. If the variance is less than the preset threshold, the coordinate values of the target points in the X direction (or the Y direction) are close to one another, that is, the sliding track is approximately a straight line; it can then be further determined whether the length of the sliding track is greater than a preset length. In a case where the length of the sliding track is greater than the preset length, the shooting preview interface is updated to display the two sub-preview interfaces. In this way, division triggered by an erroneous touch can be prevented, improving the effectiveness of the user's operation.
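- A minimal Kotlin sketch of this check, assuming a near-vertical swipe is being tested; the function name and the concrete threshold values are illustrative assumptions, not values from the disclosure:

```kotlin
// Hypothetical sketch of the straight-line test described above: the track is
// accepted as a vertical divider only if the variance of the X coordinates is
// below a threshold and the track is long enough.

data class TrackPoint(val x: Float, val y: Float)

fun isValidVerticalDivider(
    points: List<TrackPoint>,
    varianceThreshold: Float = 25f,   // illustrative preset threshold (px^2)
    minLength: Float = 200f           // illustrative preset minimum length (px)
): Boolean {
    if (points.size < 2) return false
    val xs = points.map { it.x }
    val mean = xs.average()                                        // xdis = (x1 + ... + xN) / N
    val variance = xs.map { (it - mean) * (it - mean) }.average()  // variance d
    val length = points.last().y - points.first().y                // top-to-bottom swipe length
    return variance < varianceThreshold && kotlin.math.abs(length) > minLength
}
```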
- Optionally, after the shooting preview interface is updated to display the first sub-preview interface and the second sub-preview interface with the straight line of the sliding track as a dividing line, the method further includes: receiving a third input in which the user drags the dividing line; and, in response to the third input, controlling the dividing line to move according to the drag direction and drag distance of the third input.
- the third input may be a drag operation performed by the user on the boundary line, and the user may drag the boundary line to the left or right, or drag up or down.
- the mobile terminal controls the dividing line to move according to the user's drag direction and drag distance. For example, when the user drags 1 cm to the right, the dividing line moves 1 cm to the right. In this way, the user can adjust the size of the first sub-preview interface and the second sub-preview interface by dragging the dividing line, and the operation is convenient.
- the mobile terminal can preset the correspondence between the drag direction and the moving direction of the dividing line.
- Specifically, a correspondence between a range of drag directions and a moving direction of the dividing line can be set. For example, any drag whose direction is within 20 degrees of the rightward direction may be set to correspond to moving the dividing line to the right. In this way, the user can operate quickly and operation errors are reduced.
- The sizes of the first sub-preview interface and the second sub-preview interface change with the movement of the dividing line, so the user can adjust the two sub-preview interfaces according to the actual shooting scene and obtain a better shooting effect.
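- A minimal Kotlin sketch of this resizing, assuming a vertical dividing line; the names and the clamping behaviour are illustrative assumptions:

```kotlin
// Hypothetical sketch: adjust the widths of the two sub-preview interfaces
// when the user drags a vertical dividing line horizontally.

fun resizeOnDividerDrag(
    previewWidth: Float,
    dividerX: Float,          // current divider position, measured from the left edge
    dragDistanceX: Float      // positive = drag to the right
): Pair<Float, Float> {
    // Keep the divider inside the preview area.
    val newDividerX = (dividerX + dragDistanceX).coerceIn(0f, previewWidth)
    val firstWidth = newDividerX                  // first sub-preview (front camera)
    val secondWidth = previewWidth - newDividerX  // second sub-preview (rear camera)
    return firstWidth to secondWidth
}
```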
- Optionally, the mobile terminal is a mobile terminal having a flexible screen, and the first input is an operation of the user bending the flexible screen. In this case, updating the shooting preview interface to display a first sub-preview interface and a second sub-preview interface includes: obtaining a bending angle of the flexible screen; acquiring, when the bending angle is greater than a preset angle, a crease formed on the flexible screen by the first input; and, in a case where the direction of the crease is a preset direction, updating the shooting preview interface to display the first sub-preview interface and the second sub-preview interface with the straight line of the crease as a dividing line.
- When the mobile terminal is a mobile terminal having a flexible screen, the user can bend the flexible screen, and the mobile terminal receives the first input of the user bending the flexible screen. At this time, the flexible screen forms a certain bending angle, and a crease is formed between the two parts of the flexible screen forming the angle.
- the preset angle may be an angle preset by the mobile terminal.
- the mobile terminal acquires the crease.
- the mobile terminal updates the shooting preview interface as the first sub-preview interface and the second sub-preview interface along the straight line where the crease is located.
- For example, the flexible screen of the mobile terminal is bent vertically, and the straight line 1 where the bending crease is located divides the shooting preview interface vertically into the first sub-preview interface 11 and the second sub-preview interface 12.
- the user can quickly divide the shooting preview interface into two sub-preview interfaces by bending the flexible screen, and the user is convenient to operate.
- Optionally, after the shooting preview interface is updated to display the first sub-preview interface and the second sub-preview interface with the straight line of the crease as a dividing line, the method further includes: receiving a fourth input in which the user bends the flexible screen; and, in response to the fourth input, controlling the dividing line to move according to a bending direction of the fourth input. The movement of the dividing line corresponds to scaling of the interface areas of the first sub-preview interface and the second sub-preview interface; a first bending direction of the fourth input corresponds to a first moving direction of the dividing line, and a second bending direction of the fourth input corresponds to a second moving direction of the dividing line.
- the user can continue to bend the flexible screen.
- the first bending direction and the second bending direction can be understood as the direction in which the flexible screen bending portion rotates around the crease, for example, clockwise bending and counterclockwise bending.
- the mobile terminal can pre-set the correspondence between the bending direction and the moving direction of the dividing line. For example, when the bending direction is clockwise, the corresponding dividing line moves to the right.
- the mobile terminal acquires a preset moving direction corresponding to the bending direction and moves the boundary line. While the dividing line moves, the sizes of the first sub-preview interface and the second sub-preview interface change with the position of the dividing line.
- For example, the dividing line 1 divides the shooting preview interface into the first sub-preview interface 11 and the second sub-preview interface 12; when the user bends the flexible screen, the mobile terminal moves the dividing line 1 to the left according to the bending direction.
- In this way, the user can control the movement of the dividing line by bending the flexible screen. The sizes of the first sub-preview interface and the second sub-preview interface change with the movement of the dividing line, so the user can adjust the two sub-preview interfaces according to the actual shooting scene and obtain a better shooting effect.
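- A minimal Kotlin sketch of such a mapping, assuming a fixed movement step per bend gesture; the enum values and step size are illustrative assumptions:

```kotlin
// Hypothetical sketch: map the bending direction of a further bend gesture to
// a moving direction of the dividing line, with a fixed step per gesture.

enum class BendDirection { CLOCKWISE, COUNTERCLOCKWISE }

fun dividerStepFor(direction: BendDirection, stepPx: Float = 40f): Float =
    when (direction) {
        BendDirection.CLOCKWISE -> stepPx          // e.g. move the divider to the right
        BendDirection.COUNTERCLOCKWISE -> -stepPx  // e.g. move the divider to the left
    }

// Usage: newDividerX = (dividerX + dividerStepFor(detectedDirection)).coerceIn(0f, previewWidth)
```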
- Step 103 Receive a second input of the user.
- the second input may be a press input, a click input, or a sliding input of the user on the shooting preview interface.
- Step 104 Control, in response to the second input, to move the first photographing identifier displayed in the first sub-preview interface and the second photographing identifier displayed in the second sub-preview interface.
- The mobile terminal, in response to the second input, controls the first photographing identifier and the second photographing identifier to move toward each other, or controls the first photographing identifier to move toward the second photographing identifier, or controls the second photographing identifier to move toward the first photographing identifier, until the two photographing identifiers overlap or coincide.
- For example, the user presses on the shooting preview interface of the mobile terminal, and the mobile terminal controls the first photographing identifier 111 and the second photographing identifier 121 to move toward each other; the two photographing identifiers gradually approach each other until they overlap, and the mobile terminal displays the interface as shown in FIG.
- As another example, the user slides the first photographing identifier 111; the first photographing identifier 111 moves according to the user operation and gradually approaches the second photographing identifier 121 until the two photographing identifiers overlap, and the mobile terminal displays the interface as shown in FIG. 8.
- Step 105 In a case where the first photographing identifier and the second photographing identifier have an overlapping region of a preset area, control the front camera and the rear camera to respectively acquire a first image and a second image, and display a composite image of the first image and the second image.
- the preview image in the first sub-preview interface generates a first image
- the preview image in the second sub-preview interface generates a second image
- the first image and the second image are combined into one image.
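- As a rough illustration, the overlap condition could be checked as in the following Kotlin sketch, which models the two shooting identifiers as axis-aligned rectangles; the `Marker` type and function names are hypothetical:

```kotlin
// Hypothetical sketch: compute the overlap area of the two shooting identifiers
// and compare it with a preset area to decide whether to trigger capture.

data class Marker(val left: Float, val top: Float, val right: Float, val bottom: Float)

fun overlapArea(a: Marker, b: Marker): Float {
    val w = minOf(a.right, b.right) - maxOf(a.left, b.left)
    val h = minOf(a.bottom, b.bottom) - maxOf(a.top, b.top)
    return if (w > 0f && h > 0f) w * h else 0f
}

fun shouldCapture(first: Marker, second: Marker, presetArea: Float): Boolean =
    overlapArea(first, second) >= presetArea

// When shouldCapture(...) returns true, the terminal would trigger both cameras
// and build the composite image from the two captures.
```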
- Optionally, the mobile terminal may detect whether the first image includes a preset feature, where the preset feature may be a feature preset by the mobile terminal, for example, a face feature such as eyes, a nose, and a mouth. The first target image may also be, for example, an animal.
- In this way, the user can trigger the mobile terminal to synthesize the images captured through the two sub-preview interfaces by operating on the shooting preview interface, and to synthesize the first target image having the preset feature in the first image with the second image, which can improve the effect of image synthesis.
- Optionally, controlling the front camera and the rear camera to respectively acquire the first image and the second image, and displaying the composite image, includes: controlling the front camera and the rear camera to respectively acquire the first image and the second image; and, in a case where the first image includes a face image, displaying a composite image of the face image and the second image.
- the mobile terminal may further determine whether the face image is included in the first image, and may specifically determine according to the feature of the face.
- the face image is extracted from the first image, and the extracted face image is combined with the second image. Images other than the face image in the first image are not synthesized.
- the second image may be used as a background, and the face image may be placed on a layer on the second image to obtain a composite image including the face image.
- the first image is a user's self-photographing
- the second image is a landscape photo.
- The mobile terminal automatically extracts the user's face image from the first image and combines the face image with the landscape photo to obtain a landscape photo that includes the user's face image; no manual editing operation is required from the user to extract the face image from the first image.
- In this way, the user can obtain a photo combined with another scene without the assistance of others, and can check the self-portrait through the front camera preview, so the photographing effect is better.
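- A possible Android-flavoured Kotlin sketch of this composition step, assuming a face rectangle has already been obtained from some face detector (detection itself is not shown); the function and parameter names are hypothetical:

```kotlin
// Hypothetical Android sketch: copy the detected face region out of the first
// image and draw it over the second image used as the background.

import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Rect

fun composeFaceOverBackground(
    firstImage: Bitmap,     // front-camera capture containing the face
    faceRect: Rect,         // detected face region inside firstImage
    secondImage: Bitmap,    // rear-camera capture used as background
    left: Int,              // placement position of the face in the composite
    top: Int
): Bitmap {
    // Extract only the face region; the rest of the first image is discarded.
    val face = Bitmap.createBitmap(
        firstImage, faceRect.left, faceRect.top, faceRect.width(), faceRect.height()
    )
    // Draw the background first, then the face layer on top of it.
    val result = secondImage.copy(Bitmap.Config.ARGB_8888, /* isMutable = */ true)
    Canvas(result).drawBitmap(face, left.toFloat(), top.toFloat(), null)
    return result
}
```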
- Optionally, displaying the composite image of the face image and the second image includes: in a case where the first image includes a face image, displaying the face image at a preset position of the second image; receiving a fifth input in which the user drags the face image; moving the face image in response to the fifth input; and displaying a composite image of the face image and the second image, where the face image is located at a drag end position of the fifth input.
- the face image may be displayed at a preset position of the second image, for example, an intermediate position of the display interface, or a lower position.
- the user can move the position of the face image for better synthesis.
- the fifth input is a drag operation of the user dragging the face image, and the mobile terminal moves the face image according to the drag track of the drag operation.
- When the drag operation ends, the position of the user's finger on the screen can be understood as the end position of the drag input. The mobile terminal obtains the end position of the drag input, moves the face image to that end position, and then combines the moved face image and the second image into one image. For example, when the user drags the face image from position A to position B and then releases, position B is the drag end position.
- the user can drag the face image to any position in the display interface to obtain a better composite image.
- the size adjustment frame can also be generated on the face image, and the user can adjust the size of the face image to obtain a better composite image.
- In this way, the position of the face can be adjusted: the user can move the face image simply by operating on it, and a better composite image effect can be obtained.
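- A possible Android-flavoured Kotlin sketch of tracking such a drag and reporting the drag end position, at which the composition would be performed; the class name and callback are hypothetical:

```kotlin
// Hypothetical Android sketch: follow the finger while the face overlay is
// dragged, and report the final position when the finger is lifted.

import android.view.MotionEvent

class FaceDragTracker(private val onDragEnd: (x: Float, y: Float) -> Unit) {
    var faceX = 0f
        private set
    var faceY = 0f
        private set
    private var lastX = 0f
    private var lastY = 0f

    fun onTouchEvent(event: MotionEvent): Boolean {
        when (event.action) {
            MotionEvent.ACTION_DOWN -> { lastX = event.x; lastY = event.y }
            MotionEvent.ACTION_MOVE -> {
                // Move the face overlay along with the finger.
                faceX += event.x - lastX
                faceY += event.y - lastY
                lastX = event.x; lastY = event.y
            }
            MotionEvent.ACTION_UP ->
                // "Position B" in the description above: compose the face here.
                onDragEnd(faceX, faceY)
        }
        return true
    }
}
```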
- Optionally, the method further includes: displaying a first image editing frame and a second image editing frame, where the first image is displayed in the first image editing frame and the second image is displayed in the second image editing frame; receiving a sixth input of the user on the first image editing frame or the second image editing frame; and, in response to the sixth input, adjusting a size of the first image editing frame or the second image editing frame. The first image editing frame is used to adjust the size of the first image, and the second image editing frame is used to adjust the size of the second image.
- the first image editing frame may be an operation frame for editing the first image
- the mobile terminal may display the first image editing frame at the edge of the first image, so that the first image is displayed in the first image editing frame.
- The user can adjust the size of the first image editing frame by operating it, and the first image changes in size along with the first image editing frame.
- the sixth input may be an operation such as sliding or pressing by the user on the first image editing frame or the second image editing frame. By displaying the first image edit box and the second image edit box, the user can individually adjust the size of the first image or the second image.
- For example, the first image editing frame is displayed at the edge of the first image, and the user performs a pinch operation, sliding two fingers toward each other at different positions on the first image editing frame; the first image editing frame is reduced, and the first image is reduced accordingly.
- the user may also operate the first image editing frame to implement rotation of the first image editing frame, thereby controlling rotation of the first image.
- the mobile terminal may further display a first image editing frame in an arbitrary area of the first image, and the user may move the position of the first image editing frame in the first image, and may adjust the size of the first image editing frame, thereby intercepting the A partial image in the first image within an image edit box.
- the second image editing frame may be a frame for editing the second image
- the mobile terminal may display the second image editing frame on the edge of the second image to display the second image in the second image editing frame.
- the user can also operate the second image editing frame to adjust the size of the second image editing frame, thereby adjusting the size of the second image.
- the specific adjustment mode can be the same as the operation of the first image editing frame, and details are not described herein again.
- In this way, the user can adjust the sizes of the first image and the second image so that they fit each other, thereby obtaining a better image synthesis effect.
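- A minimal Android-flavoured Kotlin sketch of scaling an image to its editing frame once the frame has been resized; the function name is hypothetical:

```kotlin
// Hypothetical Android sketch: when an image editing frame is resized, scale
// the image it contains to the new frame size.

import android.graphics.Bitmap

fun scaleToEditingFrame(image: Bitmap, frameWidth: Int, frameHeight: Int): Bitmap {
    require(frameWidth > 0 && frameHeight > 0) { "frame must have a positive size" }
    // filter = true gives smoother results when shrinking or enlarging.
    return Bitmap.createScaledBitmap(image, frameWidth, frameHeight, true)
}
```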
- Optionally, the method further includes: displaying a third image editing frame, where the composite image is displayed in the third image editing frame; receiving a seventh input of the user on the third image editing frame; and adjusting a size of the third image editing frame in response to the seventh input, where the third image editing frame is used to adjust the size of the composite image.
- the third image editing frame includes a composite image of the first image and the second image, and specifically may include all or part of the image of the composite image.
- the seventh input may be an operation such as sliding or pressing performed by the user on the third image editing frame, and the mobile terminal adjusts the size of the third image editing frame according to the input of the user, thereby adjusting the size of the composite image.
- a third image editing frame may be displayed on the edge of the composite image, and the entire content of the composite image is displayed in the third image editing frame.
- The user can operate the third image editing frame to adjust its size, so that the size of the composite image changes along with the size of the third image editing frame.
- the mobile terminal displays an image editing frame 2 on the interface, and the image editing frame 2 contains a composite image.
- For example, when the user slides two fingers toward each other at different positions on the third image editing frame, the third image editing frame is reduced and the composite image is reduced; when the two fingers slide apart, the third image editing frame is enlarged and the composite image is enlarged.
- In this way, the size of the composite image can be quickly adjusted by operating the third image editing frame, and a better image effect can be obtained.
- In addition, the mobile terminal may display the third image editing frame at any position on the composite image, and the user may operate the third image editing frame to adjust its size and position; when editing of the third image editing frame ends, the mobile terminal can crop out the image within the third image editing frame to obtain a better image effect.
- the mobile terminal displays the image editing frame 2 on the composite image, the user can move the image editing frame 2, and the size of the image editing frame 2 can be adjusted.
- The mobile terminal then acquires the composite image within the image editing frame 2 and crops the image to that frame.
- In this way, the user can keep only the partial image that is needed and remove portions that are not suitable for the composite image, thereby obtaining a better image effect.
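- A minimal Android-flavoured Kotlin sketch of cropping the composite image to the editing frame; the function name and the clamping are illustrative assumptions:

```kotlin
// Hypothetical Android sketch: crop the composite image to the area covered by
// the third image editing frame (image editing frame 2 in the example above).

import android.graphics.Bitmap
import android.graphics.Rect

fun cropToEditingFrame(composite: Bitmap, frame: Rect): Bitmap {
    require(composite.width > 0 && composite.height > 0) { "composite must not be empty" }
    // Clamp the frame to the bitmap bounds so the crop is always valid.
    val left = frame.left.coerceIn(0, composite.width - 1)
    val top = frame.top.coerceIn(0, composite.height - 1)
    val right = frame.right.coerceIn(left + 1, composite.width)
    val bottom = frame.bottom.coerceIn(top + 1, composite.height)
    return Bitmap.createBitmap(composite, left, top, right - left, bottom - top)
}
```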
- the photographing method includes the following steps:
- Step 1201 Receive a sliding operation of the user on the shooting preview interface in a state where the current interface displays a shooting preview interface.
- Step 1202 Acquire a sliding track of the sliding operation.
- Step 1203 In the case that the sliding track meets a preset condition, the shooting preview interface update is displayed as a first sub-preview interface and a second sub-preview interface with the line of the sliding track as a dividing line, wherein The first sub-preview interface displays a preview image acquired by a front camera, and the second sub-preview interface displays a preview image acquired by a rear camera.
- Step 1204 Receive a pressing operation of the user.
- Step 1205 Control, in response to the pressing operation, to move the first photographing identifier displayed in the first sub-preview interface and the second photographing identifier displayed in the second sub-preview interface.
- Step 1206 Control the front camera and the rear camera to acquire a first image and a second image, respectively, in a case where the first photographing identifier and the second photographing identifier have an overlapping region of a preset area.
- Step 1207 In a case where the first image includes a face image, the face image is displayed at a preset position of the second image.
- Step 1208 Receive a drag input of the user dragging the face image.
- Step 1209 Move the face image in response to the drag input.
- Step 1210 Display a composite image of the face image and the second image, where the face image is located at a drag end position of the drag input.
- For the specific implementation of step 1201 to step 1210, refer to the description of step 101 to step 105; details are not described herein again.
- the photographing method includes the following steps:
- Step 1301 Receive a first bending operation of bending the flexible screen by the user in a state that the current interface displays the shooting preview interface;
- Step 1302 Obtain a bending angle of the flexible screen.
- Step 1303 Acquire a crease formed on the flexible screen by the first bending operation if the bending angle is greater than a preset angle.
- Step 1304 In a case where the direction of the crease is a preset direction, the shooting preview interface update is displayed as a first sub-preview interface and a second sub-preview interface with a line of the crease as a boundary line.
- the first sub-preview interface displays a preview image acquired by the front camera
- the second sub-preview interface displays a preview image acquired by the rear camera.
- Step 1305 Receive a second bending operation of the user.
- Step 1306 Control, in response to the second bending operation, to move the first photographing identifier displayed in the first sub-preview interface and the second photographing identifier displayed in the second sub-preview interface.
- Step 1307 Control the front camera and the rear camera to acquire a first image and a second image, respectively, in a case where the first photographing identifier and the second photographing identifier have an overlapping area of a preset area.
- Step 1308 Display the first image, the second image, and an image editing frame, wherein the first image and/or the second image are displayed in the image editing frame.
- Step 1309 Receive an input of a user on the image editing frame.
- Step 1310 Adjust the size of the image editing frame in response to a user operation.
- Step 1311 Display a composite image of the face image in the image editing frame and the second image.
- The photographing method may be applied to a mobile terminal, such as a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), or a wearable device.
- The photographing method of the embodiment of the present disclosure receives a first input of the user in a state where the current interface displays a shooting preview interface; in response to the first input, updates the shooting preview interface to display a first sub-preview interface and a second sub-preview interface; receives a second input of the user; in response to the second input, controls the first photographing identifier displayed in the first sub-preview interface and the second photographing identifier displayed in the second sub-preview interface to move; and, in a case where the first photographing identifier and the second photographing identifier have an overlapping region of a preset area, controls the front camera and the rear camera to respectively acquire a first image and a second image, and displays a composite image of the first image and the second image. The first sub-preview interface displays a preview image acquired by the front camera, and the second sub-preview interface displays a preview image acquired by the rear camera.
- In this way, the mobile terminal can control the front and rear cameras to capture images simultaneously, and combine the two images captured by the front and rear cameras into one image when the two shooting identifiers in the two sub-preview interfaces overlap by a preset area. The process of generating a composite image is thus simple to operate.
- FIG. 14 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure.
- the mobile terminal has a front camera and a rear camera.
- the mobile terminal 1400 includes:
- the first receiving module 1401 is configured to receive a first input of the user in a state that the current interface displays the shooting preview interface;
- The first display module 1402 is configured to, in response to the first input received by the first receiving module 1401, update the shooting preview interface to display a first sub-preview interface and a second sub-preview interface.
- a second receiving module 1403, configured to receive a second input of the user
- The first moving module 1404 is configured to, in response to the second input received by the second receiving module 1403, control the first photographing identifier displayed in the first sub-preview interface and the second photographing identifier displayed in the second sub-preview interface to move.
- The second display module 1405 is configured to, in a case where the first photographing identifier and the second photographing identifier have an overlapping region of a preset area, control the front camera and the rear camera to respectively acquire a first image and a second image, and display a composite image of the first image and the second image.
- the first sub-preview interface displays a preview image acquired by the front camera
- the second sub-preview interface displays a preview image acquired by the rear camera.
- the first input is a sliding operation of the user on the shooting preview interface
- the first display module includes:
- a first acquiring submodule configured to acquire a sliding track of the first input
- a first display sub-module configured to, when the sliding track acquired by the first acquiring sub-module meets a preset condition, update the shooting preview interface to display the first sub-preview interface and the second sub-preview interface with the straight line of the sliding track as a dividing line.
- Optionally, the first display sub-module includes:
- An acquiring unit configured to acquire N target points on the sliding track
- a calculating unit configured to respectively acquire coordinate values of each target point acquired by the acquiring unit in a predetermined coordinate system, and calculate a variance of coordinate values of the N target points;
- a first display unit configured to, when the variance calculated by the calculating unit is less than a preset threshold and the length of the sliding track is greater than a preset length, update the shooting preview interface to display the first sub-preview interface and the second sub-preview interface with the straight line of the sliding track as a dividing line;
- where the coordinate value of each target point is the coordinate value of the target point in the X direction or the Y direction of the predetermined coordinate system, and N is an integer greater than 1.
- the mobile terminal further includes:
- a third receiving module configured to receive a third input that the user drags the dividing line
- a second moving module configured to control the dividing line to move according to the dragging direction and the dragging distance of the third input in response to the third input received by the third receiving module.
- the mobile terminal is a mobile terminal having a flexible screen, and the first input is an operation of bending a flexible screen by a user;
- the first display module includes:
- a second obtaining submodule configured to acquire a bending angle of the flexible screen
- a third obtaining sub-module configured to acquire a crease formed on the flexible screen by the first input if the bending angle acquired by the second acquiring sub-module is greater than a preset angle
- a second display sub-module configured to, when the direction of the crease acquired by the third acquiring sub-module is a preset direction, update the shooting preview interface to display the first sub-preview interface and the second sub-preview interface with the straight line of the crease as a dividing line.
- the mobile terminal further includes:
- a fourth receiving module configured to receive a fourth input that the user bends the flexible screen
- a third moving module configured to, in response to the fourth input received by the fourth receiving module, control the dividing line to move according to a bending direction of the fourth input;
- where the movement of the dividing line corresponds to scaling of the interface areas of the first sub-preview interface and the second sub-preview interface; the first bending direction of the fourth input corresponds to a first moving direction of the dividing line, and the second bending direction of the fourth input corresponds to a second moving direction of the dividing line.
- the second display module includes:
- a collecting sub-module configured to, when the first photographing identifier and the second photographing identifier have an overlapping region of a preset area, control the front camera and the rear camera to respectively acquire the first image and the second image;
- a second display submodule configured to display the composite image of the face image and the second image in a case where the first image includes a face image.
- the second display submodule includes:
- a second display unit configured to display the face image at a preset position of the second image in a case where the first image includes a face image
- a receiving unit configured to receive a fifth input in which the user drags the face image displayed by the second display unit;
- a moving unit configured to move the face image in response to the fifth input received by the receiving unit;
- a third display unit configured to display a composite image of the face image moved by the moving unit and the second image;
- the face image is located at a drag end position of the fifth input.
- the mobile terminal further includes:
- a third display module configured to display a first image editing frame and a second image editing frame, wherein the first image editing frame displays the first image, and the second image editing frame displays the second image;
- a fifth receiving module configured to receive a sixth input of the user on the first image editing frame or the second image editing frame displayed by the third display module;
- a first adjustment module configured to adjust a size of the first image editing frame or the second image editing frame in response to the sixth input received by the fifth receiving module
- the first image editing frame is used to adjust the size of the first image
- the second image editing frame is used to adjust the size of the second image
- the mobile terminal further includes:
- a fourth display module configured to display a third image editing frame, where the composite image is displayed in the third image editing frame
- a sixth receiving module configured to receive a seventh input of the user on the third image editing frame displayed by the fourth display module
- a second adjustment module configured to adjust a size of the third image editing frame in response to the seventh input received by the sixth receiving module
- the third image editing frame is used to adjust the size of the composite image.
- the mobile terminal 1400 can implement various processes implemented by the mobile terminal in the foregoing method embodiments. To avoid repetition, details are not described herein again.
- In this way, the mobile terminal can control the front and rear cameras to capture images simultaneously, and combine the two images captured by the front and rear cameras into one image when the two shooting identifiers in the two sub-preview interfaces overlap by a preset area; the process of synthesizing the image is simple.
- FIG. 15 is a schematic diagram of a hardware structure of a mobile terminal that implements various embodiments of the present disclosure, the mobile terminal having a front camera and a rear camera at the same time.
- the mobile terminal 1500 includes, but is not limited to, a radio frequency unit 1501, a network module 1502, an audio output unit 1503, an input unit 1504, a sensor 1505, a display unit 1506, a user input unit 1507, an interface unit 1508, a memory 1509, a processor 1510, and Power supply 1511 and other components.
- The mobile terminal structure shown in FIG. 15 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than those illustrated, combine some components, or use a different component arrangement.
- the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a car mobile terminal, a wearable device, a pedometer, and the like.
- The processor 1510 is configured to: receive a first input of the user in a state where the current interface displays a shooting preview interface; in response to the first input, update the shooting preview interface to display a first sub-preview interface and a second sub-preview interface; receive a second input of the user; in response to the second input, control the first photographing identifier displayed in the first sub-preview interface and the second photographing identifier displayed in the second sub-preview interface to move; and, in a case where the first photographing identifier and the second photographing identifier have an overlapping region of a preset area, control the front camera and the rear camera to respectively acquire a first image and a second image, and display a composite image of the first image and the second image. The first sub-preview interface displays a preview image acquired by the front camera, and the second sub-preview interface displays a preview image acquired by the rear camera.
- In this way, the mobile terminal can control the front and rear cameras to capture images simultaneously, and combine the two images captured by the front and rear cameras into one image when the two shooting identifiers in the two sub-preview interfaces overlap by a preset area; the image is synthesized in a simple way.
- Optionally, the first input is a sliding operation of the user on the shooting preview interface; the processor 1510 is further configured to acquire a sliding track of the first input and, when the sliding track meets a preset condition, update the shooting preview interface to display a first sub-preview interface and a second sub-preview interface with the straight line of the sliding track as a dividing line.
- Optionally, the processor 1510 is further configured to: acquire N target points on the sliding track; respectively acquire coordinate values of each target point in a predetermined coordinate system, and calculate a variance of the coordinate values of the N target points; and, if the variance is less than a preset threshold and the length of the sliding track is greater than a preset length, update the shooting preview interface to display a first sub-preview interface and a second sub-preview interface with the straight line of the sliding track as a dividing line; where the coordinate value of each target point is the coordinate value of the target point in the X direction or the Y direction of the predetermined coordinate system, and N is an integer greater than 1.
- Optionally, the processor 1510 is further configured to: receive a third input in which the user drags the dividing line; and, in response to the third input, control the dividing line to move according to the drag direction and drag distance of the third input.
- Optionally, the mobile terminal is a mobile terminal having a flexible screen, and the first input is an operation of the user bending the flexible screen; the processor 1510 is further configured to: acquire a bending angle of the flexible screen; acquire, when the bending angle is greater than a preset angle, a crease formed on the flexible screen by the first input; and, if the direction of the crease is a preset direction, update the shooting preview interface to display a first sub-preview interface and a second sub-preview interface with the straight line of the crease as a dividing line.
- Optionally, the processor 1510 is further configured to: receive a fourth input in which the user bends the flexible screen; and, in response to the fourth input, control the dividing line to move according to a bending direction of the fourth input. The movement of the dividing line corresponds to scaling of the interface areas of the first sub-preview interface and the second sub-preview interface; the first bending direction of the fourth input corresponds to a first moving direction of the dividing line, and the second bending direction of the fourth input corresponds to a second moving direction of the dividing line.
- the processor 1510 is further configured to: when the first photographing identifier and the second photographing identifier have an overlapping area of a preset area, control the front camera and the rear camera to separately collect a first image and a second image; in a case where the first image includes a face image, a composite image of the face image and the second image is displayed.
- Optionally, the processor 1510 is further configured to: when the first image includes a face image, display the face image at a preset position of the second image; receive a fifth input in which the user drags the face image; move the face image in response to the fifth input; and display the composite image of the face image and the second image, where the face image is located at the drag end position of the fifth input.
- the processor 1510 is further configured to display a first image editing frame and a second image editing frame, where the first image editing frame displays the first image, and the second image editing frame displays the a second image; receiving a sixth input of the user on the first image editing frame or the second image editing frame; adjusting the first image editing frame or the second image in response to the sixth input Editing a size of the frame; wherein the first image editing frame is for adjusting a size of the first image, and the second image editing frame is for adjusting a size of the second image.
- the processor 1510 is further configured to: display a third image editing frame, the third image editing frame displays the composite image; receive a seventh input of the user on the third image editing frame; The seventh input adjusts a size of the third image editing frame; wherein the third image editing frame is used to adjust a size of the composite image.
- The radio frequency unit 1501 may be used for receiving and transmitting signals during the sending and receiving of information or during a call; specifically, after downlink data is received from a base station, it is delivered to the processor 1510 for processing, and uplink data is sent to the base station.
- radio frequency unit 1501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- the radio unit 1501 can also communicate with the network and other devices through a wireless communication system.
- the mobile terminal provides the user with wireless broadband Internet access through the network module 1502, such as helping the user to send and receive emails, browse web pages, and access streaming media.
- the audio output unit 1503 can convert the audio data received by the radio frequency unit 1501 or the network module 1502 or stored in the memory 1509 into an audio signal and output as a sound. Moreover, the audio output unit 1503 may also provide an audio output (eg, a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the mobile terminal 1500.
- the audio output unit 1503 includes a speaker, a buzzer, a receiver, and the like.
- the input unit 1504 is for receiving an audio or video signal.
- the input unit 1504 may include a graphics processing unit (GPU) 15041 and a microphone 15042, and the graphics processor 15041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
- the processed image frames can be displayed on the display unit 1506.
- the image frames processed by the graphics processor 15041 may be stored in the memory 1509 (or other storage medium) or transmitted via the radio unit 1501 or the network module 1502.
- the microphone 15042 can receive sound and can process such sound as audio data.
- in the case of a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 1501 for output.
- the mobile terminal 1500 also includes at least one type of sensor 1505, such as a light sensor, motion sensor, and other sensors.
- the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 15061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 15061 and/or the backlight when the mobile terminal 1500 is moved to the ear.
- as one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally on three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the attitude of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping).
- the sensor 1505 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described herein again.
- the display unit 1506 is for displaying information input by the user or information provided to the user.
- the display unit 1506 can include a display panel 15061.
- the display panel 15061 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
- the user input unit 1507 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal.
- the user input unit 1507 includes a touch panel 15071 and other input devices 15072.
- the touch panel 15071, also referred to as a touch screen, can collect touch operations performed by the user on or near it (for example, operations performed by the user on or near the touch panel 15071 with a finger, a stylus, or any other suitable object or accessory).
- the touch panel 15071 can include two parts of a touch detection device and a touch controller.
- the touch detection device detects the touch orientation of the user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends the coordinates to the processor 1510, and then receives and executes commands sent by the processor 1510.
- the touch panel 15071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
- the user input unit 1507 can also include other input devices 15072.
- the other input devices 15072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control button, a switch button, etc.), a trackball, a mouse, and a joystick, which are not described herein.
- the touch panel 15071 can be overlaid on the display panel 15061.
- when the touch panel 15071 detects a touch operation on or near it, the touch panel 15071 transmits the operation to the processor 1510 to determine the type of the touch event, and the processor 1510 then provides a corresponding visual output on the display panel 15061 according to the type of the touch event.
- although the touch panel 15071 and the display panel 15061 are used as two independent components to implement the input and output functions of the mobile terminal, in some embodiments the touch panel 15071 and the display panel 15061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
- the interface unit 1508 is an interface for connecting an external device to the mobile terminal 1500.
- for example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
- the interface unit 1508 may be configured to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the mobile terminal 1500, or may be used to transfer data between the mobile terminal 1500 and an external device.
- Memory 1509 can be used to store software programs as well as various data.
- the memory 1509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the mobile phone (such as audio data, phone book, etc.).
- memory 1509 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
- the processor 1510 is the control center of the mobile terminal, which connects various parts of the entire mobile terminal by using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 1509 and invoking data stored in the memory 1509, so as to monitor the mobile terminal as a whole.
- the processor 1510 can include one or more processing units; optionally, the processor 1510 can integrate an application processor and a modem processor, wherein the application processor mainly processes the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1510.
- the mobile terminal 1500 can also include a power source 1511 (such as a battery) for powering various components.
- the power source 1511 can be logically coupled to the processor 1510 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
- the mobile terminal 1500 includes some functional modules not shown, and details are not described herein again.
- optionally, an embodiment of the present disclosure further provides a mobile terminal, including a processor 1510, a memory 1509, and a computer program stored on the memory 1509 and executable on the processor 1510, where the computer program, when executed by the processor 1510, implements the processes of the foregoing photographing method embodiments and can achieve the same technical effects; to avoid repetition, details are not described herein again.
- the embodiment of the present disclosure further provides a computer readable storage medium.
- the computer readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the processes of the foregoing photographing method embodiments and can achieve the same technical effects; to avoid repetition, details are not described herein again.
- the computer readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- from the description of the foregoing embodiments, a person skilled in the art can clearly understand that the methods of the foregoing embodiments can be implemented by means of software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation.
- based on such an understanding, the technical solutions of the present disclosure, in essence or the part contributing to the related art, may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc).
- the software product includes several instructions for causing a mobile terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- User Interface Of Digital Computer (AREA)
- Telephone Function (AREA)
Abstract
The present disclosure provides a photographing method and a mobile terminal. The method includes: receiving a first input of a user while a shooting preview interface is displayed on the current interface; in response to the first input, updating the shooting preview interface to be displayed as a first sub-preview interface and a second sub-preview interface; receiving a second input of the user; in response to the second input, controlling a first photographing identifier displayed in the first sub-preview interface and a second photographing identifier displayed in the second sub-preview interface to move; and, in a case where the first photographing identifier and the second photographing identifier overlap, controlling the front camera and the rear camera to respectively capture a first image and a second image, and displaying a composite image of the first image and the second image.
Description
相关申请的交叉引用
本申请主张在2018年4月4日在中国提交的中国专利申请号No.201810295832.9的优先权,其全部内容通过引用包含于此。
本公开实施例涉及通信技术领域,尤其涉及一种拍照方法及移动终端。
随着移动终端的快速发展,移动终端已经成为人们必不可少的图像处理工具。移动终端不仅可以用于美化图像,还能够进行图像合成。例如,当用户外出游玩拍摄到较好的景区照片时,可以使用移动终端将用户照片与景区照片进行图像合成。
在相关技术中,当用户需要获得合成的图像时,用户首先需要拍摄图像,并将图像存储在相册中,然后利用图像合成软件,在相册中选择需要合成的两张图像,并将两张图像合成为一张图像,操作繁琐。
发明内容
本公开实施例提供一种拍照方法及移动终端,以解决现有的合成图像的生成过程操作繁琐的问题。
为了解决上述技术问题,本公开是这样实现的:
第一方面,本公开实施例提供了一种拍照方法,包括:
在当前界面显示拍摄预览界面的状态下,接收用户的第一输入;
响应于所述第一输入,将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面;
接收用户的第二输入;
响应于所述第二输入,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识移动;
在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像;
其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。
第二方面,本公开实施例还提供一种移动终端,包括:
第一接收模块,用于在当前界面显示拍摄预览界面的状态下,接收用户的第一输入;
第一显示模块,用于响应于所述第一接收模块接收的所述第一输入,将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面;
第二接收模块,用于接收用户的第二输入;
第一移动模块,用于响应于所述第二接收模块接收的所述第二输入,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识移动;
第二显示模块,用于在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像;
其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。
第三方面,本公开实施例还提供一种移动终端,包括:存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如上所述的拍照方法中的步骤。
第四方面,本公开实施例还提供一种可读存储介质,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如上所述的拍照方法中的步骤。
在本公开实施例中,在当前界面显示拍摄预览界面的状态下,接收用户的第一输入;响应于所述第一输入,将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面;接收用户的第二输入;响应于所述第二输入,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显 示的第二拍摄标识移动;在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像;其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。这样,移动终端可以控制前置和后置摄像头同时拍摄图像,并在两个预览界面中的两个拍摄标识存在重叠区域时,控制前置和后置摄像头拍摄的两张图像合成为一张图像,合成图像的生成过程操作简单。
图1是本公开实施例提供的拍照方法的流程图之一;
图2是本公开实施例提供的移动终端的显示界面图之一;
图3是本公开实施例提供的移动终端的显示界面图之二;
图4是本公开实施例提供的移动终端的显示界面图之三;
图5是本公开实施例提供的移动终端的显示界面图之四;
图6是本公开实施例提供的移动终端的显示界面图之五;
图7是本公开实施例提供的移动终端的显示界面图之六;
图8是本公开实施例提供的移动终端的显示界面图之七;
图9是本公开实施例提供的移动终端的显示界面图之八;
图10是本公开实施例提供的移动终端的显示界面图之九;
图11是本公开实施例提供的移动终端的显示界面图之十;
图12是本公开实施例提供的拍照方法的流程图之二;
图13是本公开实施例提供的拍照方法的流程图之三;
图14是本公开实施例提供的移动终端的结构图之一;
图15是本公开实施例提供的移动终端的结构图之二。
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是 全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
参见图1,图1是本公开实施例提供的拍照方法的流程图,如图1所示,包括以下步骤:
步骤101、在当前界面显示拍摄预览界面的状态下,接收用户的第一输入。
其中,拍摄预览界面可以是移动终端在拍摄过程中,处于预览状态时显示的界面,此时,拍摄预览界面内显示预览图像。用户可以在拍摄预览界面上进行第一输入,第一输入具体可以是通过在显示界面上进行的滑动操作或者对移动终端进行折叠而触发的输入。
步骤102、响应于所述第一输入,将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面,其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。
在此步骤中,第一子预览界面和第二子预览界面分别为两个独立的预览界面,移动终端将原拍摄预览界面划分为两个子预览界面,并同时显示两个子预览界面。
第一子预览界面和第二子预览界面可以是大小相等或大小不等的两个预览界面,可以是横向或竖向划分的两个子预览界面,子预览界面的划分尺寸和划分方式可以根据用户的输入方式确定。
具体地,在所述第一输入为用户在所述拍摄预览界面上的滑动操作的情况下,获取所述第一输入的滑动轨迹;在所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面。
在该实施方式中,用户可以在拍摄预览界面上进行滑动操作,移动终端可以获取滑动操作的滑动轨迹,并判断滑动轨迹是否满足触发将拍摄预览界面划分子预览界面的预设条件。其中,预设条件可以是移动终端预先设置并存储的条件,该条件具体可以是滑动轨迹为直线,滑动轨迹的长度大于预设长度等等。
在滑动轨迹满足预设条件的情况下,移动终端获取滑动轨迹所在的直线, 在具体实施时,可以根据滑动轨迹上的点的分布位置,确定滑动轨迹所在的直线,该直线为第一子预览界面和第二子预览界面的分界线。
例如,如图2所示,当移动终端显示拍摄预览界面时,用户在拍摄预览界面上从上到下滑动。如图3所示,移动终端获取滑动轨迹,并确定滑动轨迹所在的直线1,并以直线1为分界线,将拍摄预览界面划分为第一子预览界面11和第二子预览界面12。其中,第一子预览界面11中的预览画面为前置摄像头采集的图像,第二子预览界面12中的预览画面为后置摄像头采集的图像。
这样,用户通过在拍摄预览界面进行操作,即可以触发移动终端进入前置和后置摄像头同时预览的模式,并控制移动终端通过前置摄像头和后置摄像头同时进行拍摄,用户操作便捷。根据用户的滑动轨迹,可确定第一子预览界面和第二子预览界面的分界线的位置,且用户可以根据拍摄对象的大小控制拍摄界面划分的位置,操作方式灵活。
具体地,所述在所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面,包括:获取所述滑动轨迹上的N个目标点;分别获取每个目标点在预定坐标系中的坐标值,并计算所述N个目标点的坐标值的方差;在所述方差小于预设阈值且所述滑动轨迹的长度大于预设长度的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面;其中,每个目标点的坐标值为每个目标点在所述预定坐标系的X方向或Y方向上的坐标值;N为大于1的整数。
其中,N个目标点可以是滑动轨迹上的特征点,例如,在滑动轨迹上获取等距离的N个目标点。预定坐标系可以是移动终端预先设定的坐标系,例如,以移动终端的短边方向作为X方向,以移动终端的长边方向作为Y方向,形成的坐标系。移动终端可以获取每个目标点在坐标系中的X方向或者Y方向的坐标值,并计算N个坐标值的方差。
为了便于理解,以计算每个目标点在坐标系的X方向的坐标值为例,并结合图进行说明。
如图4所示,移动终端获取N个目标点在X方向的N个坐标值,N个坐标值分别为x 1dis 、x 2dis 、…、x ndis ,则N个坐标点的平均坐标值x dis 可以表达为:

$$x_{dis}=\frac{x_{1dis}+x_{2dis}+\cdots+x_{ndis}}{N}$$
设N个目标点的坐标值的方差为d,则d可以表达为:

$$d=\frac{(x_{1dis}-x_{dis})^{2}+(x_{2dis}-x_{dis})^{2}+\cdots+(x_{ndis}-x_{dis})^{2}}{N}$$
通过上述计算公式,可以计算N个目标点对应的N个坐标值的方差d,并判断方差d是否小于预设阈值。若方差值小于预设阈值,则表示多个目标点在X方向或者在Y方向上的坐标值接近,即滑动轨迹为直线,可以进一步判断滑动轨迹的长度是否大于预设长度。在滑动轨迹的长度大于预设长度的情况下,将拍摄预览界面更新显示为两个子预览界面。这样,可以防止用户由于误触摸而导致的误操作,提高用户操作的有效性。
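For illustration, the straight-line test described above can be sketched in Kotlin: sample N target points from the swipe, compute the variance of their coordinate values in one direction, and only split the preview when the variance is below a preset threshold and the trajectory is long enough. The `Point` type, the endpoint-based length approximation, and the threshold parameters are assumptions of this sketch, not the disclosed implementation.

```kotlin
// Hypothetical sketch of the swipe-trajectory test: low variance of the X
// coordinates means the swipe is roughly a straight (vertical) line.
data class Point(val x: Double, val y: Double)

fun isSplitGesture(points: List<Point>, varianceThreshold: Double, minLength: Double): Boolean {
    if (points.size < 2) return false
    // Variance of the X coordinates of the N sampled target points
    // (use the Y coordinates instead for a horizontal dividing line).
    val mean = points.sumOf { it.x } / points.size
    val variance = points.sumOf { (it.x - mean) * (it.x - mean) } / points.size
    // Approximate the trajectory length by the distance between its end points.
    val dx = points.last().x - points.first().x
    val dy = points.last().y - points.first().y
    val length = kotlin.math.sqrt(dx * dx + dy * dy)
    return variance < varianceThreshold && length > minLength
}
```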
可选地,所述在所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面之后,所述接收用户的第二输入之前,所述方法还包括:接收用户拖动所述分界线的第三输入;响应于所述第三输入,控制所述分界线按照所述第三输入的拖动方向和拖动距离进行移动。
其中,第三输入可以是用户在分界线上进行的拖动操作,用户可以将分界线向左或向右拖动,也可以向上或者向下拖动。移动终端控制分界线按照用户的拖动方向和拖动距离移动。例如,用户向右拖动1厘米时,分界线向右移动1厘米。这样,用户通过拖动分界线,即可调节第一子预览界面和第二子预览界面的大小,操作便捷。
在此步骤之前,移动终端可以预先设置拖动方向与分界线移动方向之间的对应关系,在具体实施时,可以设置拖动方向的范围与分界线移动方向之间的对应关系。例如,设置拖动方向为正右方以及方向角度偏离20°以内的范围均对应分界线向右移动。这样,便于用户快速操作,减小用户的操作误差。
当分界线移动时,第一子预览界面和第二子预览界面的大小随着分界线的移动而变化,这样,用户可以根据实际的拍摄场景改变两个子预览界面的 大小,从而获得更好的拍摄效果。
在所述移动终端为具有柔性屏的移动终端,所述第一输入为用户弯折所述柔性屏的操作的情况下,所述将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面,包括:获取所述柔性屏的弯折角度;在所述弯折角度大于预设角度的情况下,获取所述第一输入在所述柔性屏上形成的折痕;在所述折痕的方向为预设方向的情况下,将所述拍摄预览界面更新显示为以所述折痕所在直线为分界线的第一子预览界面和第二子预览界面。
在该实施方式中,当移动终端为具有柔性屏的移动终端时,用户可以对移动终端的柔性屏进行弯折,移动终端接收用户弯折柔性屏的第一输入。此时,柔性屏形成一定的弯折角度,且形成角度的两部分柔性屏之间形成折痕。
上述预设角度可以是移动终端预先设置的角度,在移动终端的柔性屏的弯折角度大于预设角度的情况下,移动终端获取折痕。在折痕的方向为预设方向的情况下,移动终端沿着折痕所在的直线,将拍摄预览界面更新显示为第一子预览界面和第二子预览界面。
例如,如图5所示,当移动终端显示为拍摄预览界面时,将移动终端的柔性屏进行竖向弯折,弯折折痕所在的直线1将拍摄预览界面划分为竖向的第一子预览界面11和第二子预览界面12。
这样,用户通过将柔性屏进行弯折即可以快速将拍摄预览界面划分为两个子预览界面,用户操作便捷。
可选地,所述在所述弯折轨迹为直线轨迹的情况下,将所述拍摄预览界面更新显示为以所述弯折轨迹所在直线为分界线的第一子预览界面和第二子预览界面之后,所述方法还包括:接收用户弯折所述柔性屏的第四输入;响应于所述第四输入,控制所述分界线按照所述第四输入的弯折方向进行移动;其中,所述分界线位置的移动对应所述第一子预览界面和所述第二子预览界面的界面面积的缩放;所述第四输入的第一弯折方向对应所述分界线的第一移动方向,所述第四输入的第二弯折方向对应所述分界线的第二移动方向。
在将拍摄预览界面更新显示为两个子预览界面后,用户可以继续对柔性屏进行弯折。其中,第一弯折方向和第二弯折方向可以理解为柔性屏弯折部分绕着折痕旋转的方向,例如,顺时针弯折、逆时针弯折。移动终端可以预 先设置弯折方向与分界线移动方向之间的对应关系,例如,当弯折方向为顺时针时对应分界线向右移动。当用户将柔性屏弯折时,移动终端获取预设的与弯折方向对应的移动方向,移动分界线。在分界线移动的同时,第一子预览界面和第二子预览界面的大小随着分界线的位置而改变。
例如,如图6所示,分界线1将拍摄预览界面划分为第一子预览界面11和第二子预览界面12,当用户顺时针旋转柔性屏,移动终端根据弯折方向将分界线1向左移动。
这样,用户可以通过弯折柔性屏控制分界线移动,当分界线移动时,第一子预览界面和第二子预览界面的大小随着分界线的移动而变化,这样,用户可以根据实际的拍摄场景改变两个子预览界面的大小,从而获得更好的拍摄效果。
步骤103、接收用户的第二输入。
其中,第二输入可以是用户在拍摄预览界面的按压输入、点击输入或滑动输入等。
步骤104、响应于所述第二输入,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识移动。
在此步骤中,移动终端响应于第二输入,控制第一拍摄标识和第二拍摄标识同时相向移动,或者控制第一拍摄图标向第二拍摄图标移动,或者控制第二拍摄图标向第一拍摄图标移动,直至两个拍摄图标重叠或重合。
例如,如图7所示,用户在移动终端的拍摄预览界面上进行按压,移动终端控制第一拍摄标识111和第二拍摄标识121相向移动,两个拍摄标识逐步靠近,直至重叠,移动终端显示为如图8所示的界面。
又如,如图9所示,用户滑动第一拍摄标识111,第一拍摄标识111根据用户操作移动,并逐渐靠近第二拍摄标识121,直至两个拍摄标识重叠,移动终端显示为如图8所示的界面。
步骤105、在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像。
在此步骤中,将第一子预览界面中的预览图像生成第一图像,将第二子 预览界面中的预览图像生成第二图像,并将第一图像和第二图像合成为一张图像。
在具体实施时,移动终端可以检测第一图像中是否包含具有预设特征的第一目标图像,其中预设特征可以是移动终端预先设置的特征,例如,预设特征为具有眼睛、鼻子和嘴巴的特征,则第一目标图像可以是动物。当第一图像中包含第一目标图像时,将第一目标图像和第二图像进行合成。
这样,用户通过在拍摄预览界面进行操作即可以触发移动终端将两个子预览界面拍摄的图像进行合成,且将第一图像中具有预设特征的第一目标图像与第二图像进行合成,可以提高图像合成的效果。
进一步地,在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像;在所述第一图像中包含人脸图像的情况下,显示所述人脸图像和所述第二图像的合成图像。
在该实施方式中,移动终端可以进一步判断第一图像中是否包含人脸图像,具体可以根据人脸的特征进行判断。当第一图像中包含人脸图像时,将人脸图像从第一图像中提取出来,并将提取出来的人脸图像与第二图像进行图像合成。第一图像中除人脸图像之外的图像则不进行合成。在具体实施时,可以将第二图像作为背景,将人脸图像置于第二图像上一图层,可以获得包含人脸图像的合成图像。
例如,第一图像为用户自拍照,第二图像为风景照,移动终端自动提取第一图像中的用户人脸图像,将人脸图像与风景照进行图像合成,获得包含用户人脸图像的风景照,不需要用户对第一图像中的用户人脸图像进行提取的编辑操作。
这样,用户不需要他人协助即可以完成与其他景物的合照,且用户可以通过前置摄像头查看自拍照,拍照效果较好。
进一步地,所述在所述第一图像中包含人脸图像的情况下,显示所述人脸图像和所述第二图像的合成图像,包括:在所述第一图像中包含人脸图像的情况下,在所述第二图像的预设位置处显示所述人脸图像;接收用户拖动所述人脸图像的第五输入;响应于所述第五输入,移动所述人脸图像;显示 所述人脸图像和所述第二图像的合成图像;其中,所述人脸图像位于所述第五输入的拖动结束位置。
在该实施方式中,移动终端将人脸图像从第一图像中提取出来后,可以将人脸图像显示在第二图像的预设位置,例如,显示界面的中间位置,或者下方位置。用户可以对人脸图像的位置进行移动,以获得更好的合成效果。其中,第五输入为用户拖动人脸图像的拖动操作,移动终端根据拖动操作的拖动轨迹移动人脸图像。当用户手指离开屏幕时,拖动操作结束,当拖动操作结束时用户手指在屏幕中的位置可以理解为拖动输入的结束位置,移动终端可以获取拖动输入的结束位置,并将人脸图像移动至拖动输入的结束位置处,然后将移动后的人脸图像和第二图像合成为一张图像。例如,用户将人脸图像从位置A拖动至位置B时,拖动操作结束,则位置B为拖动结束位置。
例如,如图10所示,用户可以拖动人脸图像至显示界面中的任意位置,从而获得更好的合成图像。
在具体实施时,还可以在人脸图像上生成大小调节框,用户可以调节人脸图像的大小,从而获得更好的合成图像。
本实施方式中,人脸的位置可调整,用户可以通过在人脸图像上进行操作,移动人脸图像的位置,用户操作简单,且能够获得更好的合成图像效果。
可选地,所述显示所述第一图像和所述第二图像的合成图像之前,所述方法还包括:显示第一图像编辑框和第二图像编辑框,所述第一图像编辑框中显示所述第一图像,所述第二图像编辑框中显示所述第二图像;接收用户在所述第一图像编辑框或所述第二图像编辑框上的第六输入;响应于所述第六输入,调节所述第一图像编辑框或所述第二图像编辑框的尺寸;其中,所述第一图像编辑框用于调节所述第一图像的尺寸,所述第二图像编辑框用于调节所述第二图像的尺寸。
其中,第一图像编辑框可以是用于对第一图像进行编辑的操作框,移动终端可以将第一图像编辑框显示在第一图像的边缘,使第一图像显示在第一图像编辑框内。用户通过对第一图像编辑框进行操作即可以调节第一图像编辑框的尺寸,在第一图像编辑框的尺寸改变的同时,第一图像随着第一图像编辑框的尺寸的改变而改变。第六输入可以是用户在第一图像编辑框或第二 图像编辑框上进行的滑动或者按压等操作。通过显示第一图像编辑框和第二图像编辑框,用户可以单独调节第一图像或第二图像的尺寸。
例如,第一图像的边缘显示第一图像编辑框,用户用双指同时在第一图像编辑框上的不同位置进行相向滑动操作,第一图像编辑框缩小的同时,第一图像随之缩小。
在具体实施时,用户还可以对第一图像编辑框进行操作,实现对第一图像编辑框的旋转,从而控制对第一图像的旋转。
另外,移动终端还可以在第一图像的任意区域显示第一图像编辑框,用户可以移动第一图像编辑框在第一图像中的位置,且可以调节第一图像编辑框的大小,从而截取第一图像编辑框内的第一图像中的部分图像。
第二图像编辑框可以是用于对第二图像进行编辑的框,移动终端可以将第二图像编辑框显示在第二图像的边缘,使第二图像显示在第二图像编辑框内。用户也可以对第二图像编辑框进行操作调节第二图像编辑框的尺寸,从而调节第二图像的尺寸。具体的调节方式可以同对第一图像编辑框的操作,此处不再赘述。
用户可以根据第一图像和第二图像的尺寸大小,对第一图像或第二图像的尺寸进行调整,使第一图像和第二图像的大小相适应,从而获得更好的图像合成效果。
可选地,所述显示所述第一图像和所述第二图像的合成图像之后,所述方法还包括:显示第三图像编辑框,所述第三图像编辑框中显示所述合成图像;接收用户在所述第三图像编辑框上的第七输入;响应于所述第七输入,调节所述第三图像编辑框的尺寸;其中,所述第三图像编辑框用于调节所述合成图像的尺寸。
其中,第三图像编辑框内包括第一图像和第二图像的合成图像,具体可以包括合成图像的全部或者部分图像。第七输入可以是用户在第三图像编辑框上进行的滑动或者按压等操作,移动终端根据用户的输入,调节第三图像编辑框的大小,从而调节合成图像的大小。
在具体实施时,可以在合成图像的边缘显示第三图像编辑框,第三图像编辑框内显示合成图像的全部内容。此时,用户可以对第三图像编辑框进行 操作调节第三图像编辑框的大小,这样,在调节第三图像编辑框的大小的同时,合成图像的大小随着第三图像编辑框的大小的变化而变化。
例如,如图11所示,移动终端在界面上显示图像编辑框2,图像编辑框2内包含合成图像。用户用双指分别在第三图像编辑框上的不同位置进行相向滑动操作,第三图像编辑框缩小,合成图像随之缩小,用户的双指相背滑动时,第三图像编辑框放大,合成图像随之放大。
这样,移动终端通过对第三图像编辑框进行操作即可以快速实现对合成图像大小的调节,可以获得更好的图像效果。
另外,移动终端还可以在合成图像的任意位置显示第三图像编辑框,用户可以对第三图像编辑框进行操作,从而调节第三图像编辑框的大小和位置,当结束对第三图像编辑框进行调节时,移动终端可以截取第三图像编辑框内的图像,从而获得更好的图像效果。
例如,如图11所示,移动终端在合成图像上显示图像编辑框2,用户可以对图像编辑框2进行移动,且可以对图像编辑框2的大小进行调节。当用户确定图像编辑框2的大小和位置后,用户可以在图像编辑框2上进行按压输入,移动终端获取图像编辑框2内的合成图像,并截取图像编辑框2内的图像。
这样,用户可以仅获取需要合成的部分图像,而将不适应于合成图像的部分去除,从而获得更好的图像效果。
为了便于理解本方案,以下结合流程图对本公开的具体实施例进行举例说明。
如图12所示,拍照方法包括以下步骤:
步骤1201、在当前界面显示拍摄预览界面的状态下,接收用户在所述拍摄预览界面上的滑动操作。
步骤1202、获取所述滑动操作的滑动轨迹。
步骤1203、在所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面,其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。
步骤1204、接收用户的按压操作。
步骤1205、响应于所述按压操作,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识相向移动。
步骤1206、在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像。
步骤1207、在所述第一图像中包含人脸图像的情况下,在所述第二图像的预设位置处显示所述人脸图像。
步骤1208、接收用户拖动所述人脸图像的拖动输入。
步骤1209、响应于所述拖动输入,移动所述人脸图像。
步骤1210、显示所述人脸图像和所述第二图像的合成图像,其中,所述人脸图像位于所述拖动输入的拖动结束位置。
其中,步骤1201至步骤1210的具体实施方式可以参见步骤101至步骤105中的描述,此处不再赘述。
当移动终端为具有柔性屏的移动终端时,如图13所示,拍照方法包括以下步骤:
步骤1301、在当前界面显示拍摄预览界面的状态下,接收用户弯折所述柔性屏的第一弯折操作;
步骤1302、获取所述柔性屏的弯折角度。
步骤1303、在所述弯折角度大于预设角度的情况下,获取所述第一弯折操作在所述柔性屏上形成的折痕。
步骤1304、在所述折痕的方向为预设方向的情况下,将所述拍摄预览界面更新显示为以所述折痕所在直线为分界线的第一子预览界面和第二子预览界面,其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。
步骤1305、接收用户的第二弯折操作。
步骤1306、响应于所述第二弯折操作,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识移动。
步骤1307、在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠 区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像。
步骤1308、显示所述第一图像、所述第二图像和图像编辑框,其中,所述图像编辑框中显示所述第一图像和/或所述第二图像。
步骤1309、接收用户在所述图像编辑框上的输入;
步骤1310、响应于用户的操作,调节所述图像编辑框的尺寸。
步骤1311、显示所述图像编辑框内的所述人脸图像和所述第二图像的合成图像。
其中,步骤1301至步骤1311的具体实施方式可以参见步骤101至步骤105中的描述,此处不再赘述。
本公开实施例中,上述拍照方法可以应用于移动终端,例如:手机、平板电脑(Tablet Personal Computer)、膝上型电脑(Laptop Computer)、个人数字助理(personal digital assistant,PDA)、移动上网装置(Mobile Internet Device,MID)或可穿戴式设备(Wearable Device)等。
本公开实施例的拍照方法,在当前界面显示拍摄预览界面的状态下,接收用户的第一输入;响应于所述第一输入,将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面;接收用户的第二输入;响应于所述第二输入,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识移动;在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像;其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。这样,移动终端可以控制前置和后置摄像头同时拍摄图像,并在两个预览界面中的两个拍摄标识存在重叠区域时,控制前置和后置摄像头拍摄的两张图像合成为一张图像,合成图像的生成过程操作简单。
参见图14,图14是本公开实施例提供的移动终端的结构图,所述移动终端同时具有前置摄像头和后置摄像头,如图14所示,移动终端1400包括:
第一接收模块1401,用于在当前界面显示拍摄预览界面的状态下,接收 用户的第一输入;
第一显示模块1402,用于响应于所述第一接收模块1401接收的所述第一输入,将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面;
第二接收模块1403,用于接收用户的第二输入;
第一移动模块1404,用于响应于所述第二接收模块1403接收的所述第二输入,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识移动;
第二显示模块1405,用于在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像;
其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。
可选地,所述第一输入为用户在所述拍摄预览界面上的滑动操作;
所述第一显示模块包括:
第一获取子模块,用于获取所述第一输入的滑动轨迹;
第一显示子模块,用于在所述第一获取子模块获取的所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面。
可选地,所述显示子模块包括:
获取单元,用于获取所述滑动轨迹上的N个目标点;
计算单元,用于分别获取所述获取单元获取的每个目标点在预定坐标系中的坐标值,并计算所述N个目标点的坐标值的方差;
第一显示单元,用于在所述计算单元计算所述方差小于预设阈值且所述滑动轨迹的长度大于预设长度的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面;
其中,每个目标点的坐标值为每个目标点在所述预定坐标系的X方向或Y方向上的坐标值;N为大于1的整数。
可选地,所述移动终端还包括:
第三接收模块,用于接收用户拖动所述分界线的第三输入;
第二移动模块,用于响应于所述第三接收模块接收的所述第三输入,控制所述分界线按照所述第三输入的拖动方向和拖动距离进行移动。
可选地,所述移动终端为具有柔性屏的移动终端,所述第一输入为用户弯折所述柔性屏的操作;
所述第一显示模块包括:
第二获取子模块,用于获取所述柔性屏的弯折角度;
第三获取子模块,用于在所述第二获取子模块获取的所述弯折角度大于预设角度的情况下,获取所述第一输入在所述柔性屏上形成的折痕;
第二显示子模块,用于在所述第三获取子模块获取的所述折痕的方向为预设方向的情况下,将所述拍摄预览界面更新显示为以所述折痕所在直线为分界线的第一子预览界面和第二子预览界面。
可选地,所述移动终端还包括:
第四接收模块,用于接收用户弯折所述柔性屏的第四输入;
第三移动模块,用于响应于所述第四接收模块接收的所述第四输入,控制所述分界线按照所述第四输入的弯折方向进行移动;
其中,所述分界线位置的移动对应所述第一子预览界面和所述第二子预览界面的界面面积的缩放;所述第四输入的第一弯折方向对应所述分界线的第一移动方向,所述第四输入的第二弯折方向对应所述分界线的第二移动方向。
可选地,第二显示模块包括:
采集子模块,用于在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像;
第二显示子模块,用于在所述第一图像中包含人脸图像的情况下,显示所述人脸图像和所述第二图像的合成图像。
可选地,第二显示子模块包括:
第二显示单元,用于在所述第一图像中包含人脸图像的情况下,在所述第二图像的预设位置处显示所述人脸图像;
接收单元,用于接收用户拖动所述第二显示单元显示的所述人脸图像的 第五输入;
移动单元,用于响应于所述接收单元接收的所述第五输入,移动所述人脸图像;
第三显示单元,用于显示移动单元移动的所述人脸图像和所述第二图像的合成图像;
其中,所述人脸图像位于所述第五输入的拖动结束位置。
可选地,所述移动终端还包括:
第三显示模块,用于显示第一图像编辑框和第二图像编辑框,所述第一图像编辑框中显示所述第一图像,所述第二图像编辑框中显示所述第二图像;
第五接收模块,用于接收用户在所述第三显示模块显示的所述第一图像编辑框或所述第二图像编辑框上的第六输入;
第一调节模块,用于响应于所述第五接收模块接收的所述第六输入,调节所述第一图像编辑框或所述第二图像编辑框的尺寸;
其中,所述第一图像编辑框用于调节所述第一图像的尺寸,所述第二图像编辑框用于调节所述第二图像的尺寸。
可选地,所述移动终端还包括:
第四显示模块,用于显示第三图像编辑框,所述第三图像编辑框中显示所述合成图像;
第六接收模块,用于接收用户在所述第四显示模块显示的所述第三图像编辑框上的第七输入;
第二调节模块,用于响应于所述第六接收模块接收的所述第七输入,调节所述第三图像编辑框的尺寸;
其中,所述第三图像编辑框用于调节所述合成图像的尺寸。
移动终端1400能够实现上述方法实施例中移动终端实现的各个过程,为避免重复,这里不再赘述。
本公开实施例的移动终端1400,移动终端可以控制前置和后置摄像头同时拍摄图像,并在两个预览界面中的两个拍摄标识存在重叠区域时,控制前置和后置摄像头拍摄的两张图像合成为一张图像,图像合成方式简单。
图15为实现本公开各个实施例的一种移动终端的硬件结构示意图,所述 移动终端同时具有前置摄像头和后置摄像头。该移动终端1500包括但不限于:射频单元1501、网络模块1502、音频输出单元1503、输入单元1504、传感器1505、显示单元1506、用户输入单元1507、接口单元1508、存储器1509、处理器1510、以及电源1511等部件。本领域技术人员可以理解,图15中示出的移动终端结构并不构成对移动终端的限定,移动终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本公开实施例中,移动终端包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载移动终端、可穿戴设备、以及计步器等。
其中,处理器1510,用于在当前界面显示拍摄预览界面的状态下,接收用户的第一输入;响应于所述第一输入,将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面;接收用户的第二输入;响应于所述第二输入,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识移动;在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像;其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。
这样,移动终端可以控制前置和后置摄像头同时拍摄图像,并在两个预览界面中的两个拍摄标识存在重叠区域时,控制前置和后置摄像头拍摄的两张图像合成为一张图像,图像合成方式简单。
可选地,所述第一输入为用户在所述拍摄预览界面上的滑动操作;处理器1510还用于,获取所述第一输入的滑动轨迹;在所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面。
可选地,处理器1510还用于,获取所述滑动轨迹上的N个目标点;分别获取每个目标点在预定坐标系中的坐标值,并计算所述N个目标点的坐标值的方差;在所述方差小于预设阈值且所述滑动轨迹的长度大于预设长度的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面;其中,每个目标点的坐标值为每个目 标点在所述预定坐标系的X方向或Y方向上的坐标值;N为大于1的整数。
可选地,处理器1510还用于,接收用户拖动所述分界线的第三输入;响应于所述第三输入,控制所述分界线按照所述第三输入的拖动方向和拖动距离进行移动。
可选地,所述移动终端为具有柔性屏的移动终端,所述第一输入为用户弯折所述柔性屏的操作;处理器1510还用于,获取所述柔性屏的弯折角度;在所述弯折角度大于预设角度的情况下,获取所述第一输入在所述柔性屏上形成的折痕;在所述折痕的方向为预设方向的情况下,将所述拍摄预览界面更新显示为以所述折痕所在直线为分界线的第一子预览界面和第二子预览界面。
可选地,处理器1510还用于,接收用户弯折所述柔性屏的第四输入;响应于所述第四输入,控制所述分界线按照所述第四输入的弯折方向进行移动;其中,所述分界线位置的移动对应所述第一子预览界面和所述第二子预览界面的界面面积的缩放;所述第四输入的第一弯折方向对应所述分界线的第一移动方向,所述第四输入的第二弯折方向对应所述分界线的第二移动方向。
可选地,处理器1510还用于,在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像;在所述第一图像中包含人脸图像的情况下,显示所述人脸图像和所述第二图像的合成图像。
可选地,处理器1510还用于,在所述第一图像中包含人脸图像的情况下,在所述第二图像的预设位置处显示所述人脸图像;接收用户拖动所述人脸图像的第五输入;响应于所述第五输入,移动所述人脸图像;显示所述人脸图像和所述第二图像的合成图像;其中,所述人脸图像位于所述第五输入的拖动结束位置。
可选地,处理器1510还用于,显示第一图像编辑框和第二图像编辑框,所述第一图像编辑框中显示所述第一图像,所述第二图像编辑框中显示所述第二图像;接收用户在所述第一图像编辑框或所述第二图像编辑框上的第六输入;响应于所述第六输入,调节所述第一图像编辑框或所述第二图像编辑框的尺寸;其中,所述第一图像编辑框用于调节所述第一图像的尺寸,所述 第二图像编辑框用于调节所述第二图像的尺寸。
可选地,处理器1510还用于,显示第三图像编辑框,所述第三图像编辑框中显示所述合成图像;接收用户在所述第三图像编辑框上的第七输入;响应于所述第七输入,调节所述第三图像编辑框的尺寸;其中,所述第三图像编辑框用于调节所述合成图像的尺寸。
应理解的是,本公开实施例中,射频单元1501可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器1510处理;另外,将上行的数据发送给基站。通常,射频单元1501包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元1501还可以通过无线通信系统与网络和其他设备通信。
移动终端通过网络模块1502为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元1503可以将射频单元1501或网络模块1502接收的或者在存储器1509中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元1503还可以提供与移动终端1500执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元1503包括扬声器、蜂鸣器以及受话器等。
输入单元1504用于接收音频或视频信号。输入单元1504可以包括图形处理器(Graphics Processing Unit,GPU)15041和麦克风15042,图形处理器15041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元1506上。经图形处理器15041处理后的图像帧可以存储在存储器1509(或其它存储介质)中或者经由射频单元1501或网络模块1502进行发送。麦克风15042可以接收声音,并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元1501发送到移动通信基站的格式输出。
移动终端1500还包括至少一种传感器1505,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板15061的亮度,接 近传感器可在移动终端1500移动到耳边时,关闭显示面板15061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别移动终端姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器1505还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元1506用于显示由用户输入的信息或提供给用户的信息。显示单元1506可包括显示面板15061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板15061。
用户输入单元1507可用于接收输入的数字或字符信息,以及产生与移动终端的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元1507包括触控面板15071以及其他输入设备15072。触控面板15071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板15071上或在触控面板15071附近的操作)。触控面板15071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器1510,接收处理器1510发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板15071。除了触控面板15071,用户输入单元1507还可以包括其他输入设备15072。具体地,其他输入设备15072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
进一步的,触控面板15071可覆盖在显示面板15061上,当触控面板15071检测到在其上或附近的触摸操作后,传送给处理器1510以确定触摸事件的类型,随后处理器1510根据触摸事件的类型在显示面板15061上提供相应的视觉输出。虽然在图15中,触控面板15071与显示面板15061是作为两个独立的部件来实现移动终端的输入和输出功能,但是在某些实施例中,可以将触 控面板15071与显示面板15061集成而实现移动终端的输入和输出功能,具体此处不做限定。
接口单元1508为外部装置与移动终端1500连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元1508可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端1500内的一个或多个元件或者可以用于在移动终端1500和外部装置之间传输数据。
存储器1509可用于存储软件程序以及各种数据。存储器1509可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器1509可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器1510是移动终端的控制中心,利用各种接口和线路连接整个移动终端的各个部分,通过运行或执行存储在存储器1509内的软件程序和/或模块,以及调用存储在存储器1509内的数据,执行移动终端的各种功能和处理数据,从而对移动终端进行整体监控。处理器1510可包括一个或多个处理单元;可选地,处理器1510可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器1510中。
移动终端1500还可以包括给各个部件供电的电源1511(比如电池),可选地,电源1511可以通过电源管理系统与处理器1510逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,移动终端1500包括一些未示出的功能模块,在此不再赘述。
可选地,本公开实施例还提供一种移动终端,包括处理器1510,存储器1509,存储在存储器1509上并可在所述处理器1510上运行的计算机程序, 该计算机程序被处理器1510执行时实现上述拍照方法实施例中的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本公开实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述拍照方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。其中,所述的计算机可读存储介质,如只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本公开的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台移动终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本公开各个实施例所述的方法。
上面结合附图对本公开的实施例进行了描述,但是本公开并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本公开的启示下,在不脱离本公开宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本公开的保护之内。
Claims (22)
- 一种拍照方法,应用于包括前置摄像头和后置摄像头的移动终端,其中,所述拍照方法包括:在当前界面显示拍摄预览界面的状态下,接收用户的第一输入;响应于所述第一输入,将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面;接收用户的第二输入;响应于所述第二输入,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识移动;在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像;其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。
- 根据权利要求1所述的方法,其中,所述第一输入为用户在所述拍摄预览界面上的滑动操作;所述将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面,包括:获取所述第一输入的滑动轨迹;在所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面。
- 根据权利要求2所述的方法,其中,所述在所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面,包括:获取所述滑动轨迹上的N个目标点;分别获取每个目标点在预定坐标系中的坐标值,并计算所述N个目标点的坐标值的方差;在所述方差小于预设阈值且所述滑动轨迹的长度大于预设长度的情况 下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面;其中,每个目标点的坐标值为每个目标点在所述预定坐标系的X方向或Y方向上的坐标值;N为大于1的整数。
- 根据权利要求2所述的方法,其中,所述在所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面之后,所述接收用户的第二输入之前,所述方法还包括:接收用户拖动所述分界线的第三输入;响应于所述第三输入,控制所述分界线按照所述第三输入的拖动方向和拖动距离进行移动。
- 根据权利要求1所述的方法,其中,所述移动终端为具有柔性屏的移动终端,所述第一输入为用户弯折所述柔性屏的操作;所述将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面,包括:获取所述柔性屏的弯折角度;在所述弯折角度大于预设角度的情况下,获取所述第一输入在所述柔性屏上形成的折痕;在所述折痕的方向为预设方向的情况下,将所述拍摄预览界面更新显示为以所述折痕所在直线为分界线的第一子预览界面和第二子预览界面。
- 根据权利要求5所述的方法,其中,所述在所述弯折轨迹为直线轨迹的情况下,将所述拍摄预览界面更新显示为以所述弯折轨迹所在直线为分界线的第一子预览界面和第二子预览界面之后,所述方法还包括:接收用户弯折所述柔性屏的第四输入;响应于所述第四输入,控制所述分界线按照所述第四输入的弯折方向进行移动;其中,所述分界线位置的移动对应所述第一子预览界面和所述第二子预览界面的界面面积的缩放;所述第四输入的第一弯折方向对应所述分界线的第一移动方向,所述第四输入的第二弯折方向对应所述分界线的第二移动方 向。
- 根据权利要求1所述的方法,其中,所述在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像,包括:在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像;在所述第一图像中包含人脸图像的情况下,显示所述人脸图像和所述第二图像的合成图像。
- 根据权利要求7所述的方法,其中,所述在所述第一图像中包含人脸图像的情况下,显示所述人脸图像和所述第二图像的合成图像,包括:在所述第一图像中包含人脸图像的情况下,在所述第二图像的预设位置处显示所述人脸图像;接收用户拖动所述人脸图像的第五输入;响应于所述第五输入,移动所述人脸图像;显示所述人脸图像和所述第二图像的合成图像;其中,所述人脸图像位于所述第五输入的拖动结束位置。
- 根据权利要求1所述的方法,其中,所述显示所述第一图像和所述第二图像的合成图像之前,所述方法还包括:显示第一图像编辑框和第二图像编辑框,所述第一图像编辑框中显示所述第一图像,所述第二图像编辑框中显示所述第二图像;接收用户在所述第一图像编辑框或所述第二图像编辑框上的第六输入;响应于所述第六输入,调节所述第一图像编辑框或所述第二图像编辑框的尺寸;其中,所述第一图像编辑框用于调节所述第一图像的尺寸,所述第二图像编辑框用于调节所述第二图像的尺寸。
- 根据权利要求1所述的方法,其中,所述显示所述第一图像和所述第二图像的合成图像之后,所述方法还包括:显示第三图像编辑框,所述第三图像编辑框中显示所述合成图像;接收用户在所述第三图像编辑框上的第七输入;响应于所述第七输入,调节所述第三图像编辑框的尺寸;其中,所述第三图像编辑框用于调节所述合成图像的尺寸。
- 一种移动终端,所述移动终端具有前置摄像头和后置摄像头,其中,所述移动终端包括:第一接收模块,用于在当前界面显示拍摄预览界面的状态下,接收用户的第一输入;第一显示模块,用于响应于所述第一接收模块接收的所述第一输入,将所述拍摄预览界面更新显示为第一子预览界面和第二子预览界面;第二接收模块,用于接收用户的第二输入;第一移动模块,用于响应于所述第二接收模块接收的所述第二输入,控制所述第一子预览界面中显示的第一拍摄标识和所述第二子预览界面中显示的第二拍摄标识移动;第二显示模块,用于在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一图像和第二图像,并显示所述第一图像和所述第二图像的合成图像;其中,所述第一子预览界面显示前置摄像头采集的预览图像,所述第二子预览界面显示后置摄像头采集的预览图像。
- 根据权利要求11所述的移动终端,其中,所述第一输入为用户在所述拍摄预览界面上的滑动操作;所述第一显示模块包括:第一获取子模块,用于获取所述第一输入的滑动轨迹;第一显示子模块,用于在所述第一获取子模块获取的所述滑动轨迹满足预设条件的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面。
- 根据权利要求12所述的移动终端,其中,所述显示子模块包括:获取单元,用于获取所述滑动轨迹上的N个目标点;计算单元,用于分别获取所述获取单元获取的每个目标点在预定坐标系中的坐标值,并计算所述N个目标点的坐标值的方差;第一显示单元,用于在所述计算单元计算所述方差小于预设阈值且所述滑动轨迹的长度大于预设长度的情况下,将所述拍摄预览界面更新显示为以所述滑动轨迹所在直线为分界线的第一子预览界面和第二子预览界面;其中,每个目标点的坐标值为每个目标点在所述预定坐标系的X方向或Y方向上的坐标值;N为大于1的整数。
- 根据权利要求12所述的移动终端,还包括:第三接收模块,用于接收用户拖动所述分界线的第三输入;第二移动模块,用于响应于所述第三接收模块接收的所述第三输入,控制所述分界线按照所述第三输入的拖动方向和拖动距离进行移动。
- 根据权利要求11所述的移动终端,其中,所述移动终端为具有柔性屏的移动终端,所述第一输入为用户弯折所述柔性屏的操作;所述第一显示模块包括:第二获取子模块,用于获取所述柔性屏的弯折角度;第三获取子模块,用于在所述第二获取子模块获取的所述弯折角度大于预设角度的情况下,获取所述第一输入在所述柔性屏上形成的折痕;第二显示子模块,用于在所述第三获取子模块获取的所述折痕的方向为预设方向的情况下,将所述拍摄预览界面更新显示为以所述折痕所在直线为分界线的第一子预览界面和第二子预览界面。
- 根据权利要求15所述的移动终端,还包括:第四接收模块,用于接收用户弯折所述柔性屏的第四输入;第三移动模块,用于响应于所述第四接收模块接收的所述第四输入,控制所述分界线按照所述第四输入的弯折方向进行移动;其中,所述分界线位置的移动对应所述第一子预览界面和所述第二子预览界面的界面面积的缩放;所述第四输入的第一弯折方向对应所述分界线的第一移动方向,所述第四输入的第二弯折方向对应所述分界线的第二移动方向。
- 根据权利要求11所述的移动终端,其中,第二显示模块包括:采集子模块,用于在所述第一拍摄标识和所述第二拍摄标识有预设面积的重叠区域的情况下,控制所述前置摄像头和所述后置摄像头分别采集第一 图像和第二图像;第二显示子模块,用于在所述第一图像中包含人脸图像的情况下,显示所述人脸图像和所述第二图像的合成图像。
- 根据权利要求17所述的移动终端,其中,第二显示子模块包括:第二显示单元,用于在所述第一图像中包含人脸图像的情况下,在所述第二图像的预设位置处显示所述人脸图像;接收单元,用于接收用户拖动所述第二显示单元显示的所述人脸图像的第五输入;移动单元,用于响应于所述接收单元接收的所述第五输入,移动所述人脸图像;第三显示单元,用于显示移动单元移动的所述人脸图像和所述第二图像的合成图像;其中,所述人脸图像位于所述第五输入的拖动结束位置。
- 根据权利要求11所述的移动终端,还包括:第三显示模块,用于显示第一图像编辑框和第二图像编辑框,所述第一图像编辑框中显示所述第一图像,所述第二图像编辑框中显示所述第二图像;第五接收模块,用于接收用户在所述第三显示模块显示的所述第一图像编辑框或所述第二图像编辑框上的第六输入;第一调节模块,用于响应于所述第五接收模块接收的所述第六输入,调节所述第一图像编辑框或所述第二图像编辑框的尺寸;其中,所述第一图像编辑框用于调节所述第一图像的尺寸,所述第二图像编辑框用于调节所述第二图像的尺寸。
- 根据权利要求11所述的移动终端,还包括:第四显示模块,用于显示第三图像编辑框,所述第三图像编辑框中显示所述合成图像;第六接收模块,用于接收用户在所述第四显示模块显示的所述第三图像编辑框上的第七输入;第二调节模块,用于响应于所述第六接收模块接收的所述第七输入,调节所述第三图像编辑框的尺寸;其中,所述第三图像编辑框用于调节所述合成图像的尺寸。
- 一种移动终端,包括:存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如权利要求1至10任一项所述的拍照方法中的步骤。
- 一种计算机可读存储介质,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如权利要求1至10任一项所述的拍照方法中的步骤。
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
ES19780822T ES2959643T3 (es) | 2018-04-04 | 2019-04-04 | Método de fotografía y terminal móvil |
EP19780822.3A EP3780577B1 (en) | 2018-04-04 | 2019-04-04 | Photography method and mobile terminal |
US17/037,410 US11115591B2 (en) | 2018-04-04 | 2020-09-29 | Photographing method and mobile terminal |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810295832.9A CN108234891B (zh) | 2018-04-04 | 2018-04-04 | 一种拍照方法及移动终端 |
CN201810295832.9 | 2018-04-04 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/037,410 Continuation US11115591B2 (en) | 2018-04-04 | 2020-09-29 | Photographing method and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019192590A1 true WO2019192590A1 (zh) | 2019-10-10 |
Family
ID=62657511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/081459 WO2019192590A1 (zh) | 2018-04-04 | 2019-04-04 | 拍照方法及移动终端 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11115591B2 (zh) |
EP (1) | EP3780577B1 (zh) |
CN (1) | CN108234891B (zh) |
ES (1) | ES2959643T3 (zh) |
WO (1) | WO2019192590A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111741223A (zh) * | 2020-07-17 | 2020-10-02 | 北京搜房科技发展有限公司 | 一种全景图像拍摄方法、装置和系统 |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108234891B (zh) * | 2018-04-04 | 2019-11-05 | 维沃移动通信有限公司 | 一种拍照方法及移动终端 |
CN109062483B (zh) * | 2018-07-27 | 2021-02-19 | 维沃移动通信有限公司 | 一种图像处理方法及终端设备 |
CN108898555B (zh) * | 2018-07-27 | 2022-09-23 | 维沃移动通信有限公司 | 一种图像处理方法及终端设备 |
CN109917956B (zh) | 2019-02-22 | 2021-08-03 | 华为技术有限公司 | 一种控制屏幕显示的方法和电子设备 |
CN112153272B (zh) * | 2019-06-28 | 2022-02-25 | 华为技术有限公司 | 一种图像拍摄方法与电子设备 |
WO2021083146A1 (zh) | 2019-10-30 | 2021-05-06 | 北京字节跳动网络技术有限公司 | 视频处理方法、装置、终端及存储介质 |
CN110784674B (zh) | 2019-10-30 | 2022-03-15 | 北京字节跳动网络技术有限公司 | 视频处理的方法、装置、终端及存储介质 |
CN111064889A (zh) * | 2019-12-25 | 2020-04-24 | 惠州Tcl移动通信有限公司 | 终端设备的拍摄方法、终端设备及存储介质 |
CN111401459A (zh) * | 2020-03-24 | 2020-07-10 | 谷元(上海)文化科技有限责任公司 | 一种动画人物形态变化视觉捕捉系统 |
CN111885285B (zh) * | 2020-06-29 | 2021-11-23 | 维沃移动通信(杭州)有限公司 | 图像拍摄方法及电子设备 |
CN111669506A (zh) * | 2020-07-01 | 2020-09-15 | 维沃移动通信有限公司 | 拍照方法、装置及电子设备 |
CN111757003A (zh) * | 2020-07-01 | 2020-10-09 | Oppo广东移动通信有限公司 | 拍摄处理方法、装置、移动终端以及存储介质 |
CN112529778B (zh) * | 2020-11-24 | 2023-05-30 | 展讯通信(上海)有限公司 | 多摄像头设备的图像拼接方法及装置、存储介质、终端 |
CN112738402B (zh) * | 2020-12-30 | 2022-03-15 | 维沃移动通信(杭州)有限公司 | 拍摄方法、装置、电子设备及介质 |
CN115473996B (zh) * | 2021-06-11 | 2024-04-05 | 荣耀终端有限公司 | 一种视频拍摄方法及电子设备 |
CN113794829B (zh) * | 2021-08-02 | 2023-11-10 | 维沃移动通信(杭州)有限公司 | 拍摄方法、装置及电子设备 |
CN113810627B (zh) * | 2021-08-12 | 2023-12-19 | 惠州Tcl云创科技有限公司 | 视频处理方法、装置、移动终端及可读存储介质 |
CN117425057A (zh) * | 2022-07-07 | 2024-01-19 | 抖音视界(北京)有限公司 | 用于影像拍摄的方法、装置、设备和存储介质 |
CN115334246A (zh) | 2022-09-06 | 2022-11-11 | 抖音视界有限公司 | 用于影像拍摄的方法、装置、设备和存储介质 |
CN116074639A (zh) * | 2023-01-17 | 2023-05-05 | 北京达佳互联信息技术有限公司 | 图像生成方法、装置、电子设备及存储介质 |
CN116952252B (zh) * | 2023-09-21 | 2024-02-27 | 深圳库犸科技有限公司 | 移动路径的调整方法、终端设备及介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140098883A1 (en) * | 2012-10-09 | 2014-04-10 | Nokia Corporation | Method and apparatus for video coding |
CN104423946A (zh) * | 2013-08-30 | 2015-03-18 | 联想(北京)有限公司 | 一种图像处理方法以及电子设备 |
CN105100642A (zh) * | 2015-07-30 | 2015-11-25 | 努比亚技术有限公司 | 图像处理方法和装置 |
CN106303229A (zh) * | 2016-08-04 | 2017-01-04 | 努比亚技术有限公司 | 一种拍照方法及装置 |
CN108234891A (zh) * | 2018-04-04 | 2018-06-29 | 维沃移动通信有限公司 | 一种拍照方法及移动终端 |
Family Cites Families (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4232498B2 (ja) * | 2003-03-24 | 2009-03-04 | 富士ゼロックス株式会社 | 被写体撮影状態判定装置、画質調整装置、及び画像撮影装置 |
JP2005094741A (ja) * | 2003-08-14 | 2005-04-07 | Fuji Photo Film Co Ltd | 撮像装置及び画像合成方法 |
US7724296B2 (en) * | 2006-06-21 | 2010-05-25 | Sony Ericsson Mobile Communications Ab | Device and method for adjusting image orientation |
US8244068B2 (en) * | 2007-03-28 | 2012-08-14 | Sony Ericsson Mobile Communications Ab | Device and method for adjusting orientation of a data representation displayed on a display |
JP5218353B2 (ja) * | 2009-09-14 | 2013-06-26 | ソニー株式会社 | 情報処理装置、表示方法及びプログラム |
CN201792814U (zh) | 2010-06-09 | 2011-04-13 | 德尔福技术有限公司 | 全方位泊车辅助系统 |
US9584735B2 (en) * | 2010-11-12 | 2017-02-28 | Arcsoft, Inc. | Front and back facing cameras |
US8520080B2 (en) * | 2011-01-31 | 2013-08-27 | Hand Held Products, Inc. | Apparatus, system, and method of use of imaging assembly on mobile terminal |
KR101764372B1 (ko) * | 2011-04-19 | 2017-08-03 | 삼성전자주식회사 | 휴대용 단말기에서 영상 합성 방법 및 장치 |
CN102420898A (zh) | 2011-09-27 | 2012-04-18 | 惠州Tcl移动通信有限公司 | 一种基于手机的全景照相实现方法及手机 |
EP2779620B8 (en) * | 2011-11-07 | 2016-09-28 | Sony Interactive Entertainment Inc. | Image generation device, and image generation method |
US8866943B2 (en) * | 2012-03-09 | 2014-10-21 | Apple Inc. | Video camera providing a composite video sequence |
CN103024272A (zh) * | 2012-12-14 | 2013-04-03 | 广东欧珀移动通信有限公司 | 移动终端的双摄像头控制装置、方法、系统以及移动终端 |
KR102032347B1 (ko) * | 2013-02-26 | 2019-10-15 | 삼성전자 주식회사 | 이미지 센서 위치를 이용한 이미지 영역 설정 장치 및 방법 |
KR102081932B1 (ko) * | 2013-03-21 | 2020-04-14 | 엘지전자 주식회사 | 디스플레이 장치 및 그 제어 방법 |
KR102089432B1 (ko) * | 2013-06-20 | 2020-04-14 | 엘지전자 주식회사 | 이동 단말기 및 이의 제어방법 |
KR102063102B1 (ko) * | 2013-08-19 | 2020-01-07 | 엘지전자 주식회사 | 이동 단말기 및 그것의 제어방법 |
US10055013B2 (en) * | 2013-09-17 | 2018-08-21 | Amazon Technologies, Inc. | Dynamic object tracking for user interfaces |
KR102153436B1 (ko) * | 2014-01-15 | 2020-09-08 | 엘지전자 주식회사 | 이동단말기 및 그 제어방법 |
US20150213303A1 (en) * | 2014-01-28 | 2015-07-30 | Nvidia Corporation | Image processing with facial reference images |
CN105025215B (zh) * | 2014-04-23 | 2019-09-24 | 南京中兴新软件有限责任公司 | 一种终端基于多摄像头实现合照的方法及装置 |
KR20160033507A (ko) * | 2014-09-18 | 2016-03-28 | 엘지전자 주식회사 | 이동 단말기 및 그 제어 방법 |
KR20160056582A (ko) * | 2014-11-12 | 2016-05-20 | 엘지전자 주식회사 | 이동 단말기 및 이의 제어 방법 |
CN105578028A (zh) * | 2015-07-28 | 2016-05-11 | 宇龙计算机通信科技(深圳)有限公司 | 一种拍照方法及终端 |
KR101678861B1 (ko) * | 2015-07-28 | 2016-11-23 | 엘지전자 주식회사 | 이동단말기 및 그 제어방법 |
CN106998248B (zh) | 2016-01-26 | 2020-09-15 | 中兴通讯股份有限公司 | 一种信号发送方法和用户设备 |
CN105827952B (zh) * | 2016-02-01 | 2019-05-17 | 维沃移动通信有限公司 | 一种去除指定对象的拍照方法及移动终端 |
CN105872365A (zh) * | 2016-03-29 | 2016-08-17 | 努比亚技术有限公司 | 移动终端拍照方法及装置 |
KR20170112492A (ko) * | 2016-03-31 | 2017-10-12 | 엘지전자 주식회사 | 이동 단말기 및 그 제어방법 |
CN105912253B (zh) * | 2016-04-05 | 2021-03-02 | Oppo广东移动通信有限公司 | 一种虚拟拍照按键的触发方法、装置及移动终端 |
CN106027900A (zh) * | 2016-06-22 | 2016-10-12 | 维沃移动通信有限公司 | 一种拍照方法及移动终端 |
CN106598425A (zh) * | 2016-11-16 | 2017-04-26 | 深圳市海派通讯科技有限公司 | 智能移动终端拍照方法 |
KR20180095331A (ko) * | 2017-02-17 | 2018-08-27 | 엘지전자 주식회사 | 이동단말기 및 그 제어 방법 |
CN107368150A (zh) * | 2017-06-30 | 2017-11-21 | 维沃移动通信有限公司 | 一种拍照方法及移动终端 |
CN107395969B (zh) * | 2017-07-26 | 2019-12-03 | 维沃移动通信有限公司 | 一种拍摄方法及移动终端 |
CN107509028B (zh) * | 2017-08-10 | 2019-07-26 | 维沃移动通信有限公司 | 一种拍摄方法、移动终端和计算机可读存储介质 |
CN107613196A (zh) * | 2017-09-05 | 2018-01-19 | 珠海格力电器股份有限公司 | 一种自拍方法及其装置、电子设备 |
CN107807772A (zh) * | 2017-10-17 | 2018-03-16 | 广东欧珀移动通信有限公司 | 图像数据处理方法、装置及移动终端 |
2018
- 2018-04-04 CN CN201810295832.9A patent/CN108234891B/zh active Active
2019
- 2019-04-04 ES ES19780822T patent/ES2959643T3/es active Active
- 2019-04-04 WO PCT/CN2019/081459 patent/WO2019192590A1/zh unknown
- 2019-04-04 EP EP19780822.3A patent/EP3780577B1/en active Active
2020
- 2020-09-29 US US17/037,410 patent/US11115591B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140098883A1 (en) * | 2012-10-09 | 2014-04-10 | Nokia Corporation | Method and apparatus for video coding |
CN104423946A (zh) * | 2013-08-30 | 2015-03-18 | 联想(北京)有限公司 | 一种图像处理方法以及电子设备 |
CN105100642A (zh) * | 2015-07-30 | 2015-11-25 | 努比亚技术有限公司 | 图像处理方法和装置 |
CN106303229A (zh) * | 2016-08-04 | 2017-01-04 | 努比亚技术有限公司 | 一种拍照方法及装置 |
CN108234891A (zh) * | 2018-04-04 | 2018-06-29 | 维沃移动通信有限公司 | 一种拍照方法及移动终端 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111741223A (zh) * | 2020-07-17 | 2020-10-02 | 北京搜房科技发展有限公司 | 一种全景图像拍摄方法、装置和系统 |
CN111741223B (zh) * | 2020-07-17 | 2022-04-26 | 北京搜房科技发展有限公司 | 一种全景图像拍摄方法、装置和系统 |
Also Published As
Publication number | Publication date |
---|---|
US11115591B2 (en) | 2021-09-07 |
CN108234891B (zh) | 2019-11-05 |
EP3780577A4 (en) | 2021-08-18 |
US20210014415A1 (en) | 2021-01-14 |
ES2959643T3 (es) | 2024-02-27 |
EP3780577B1 (en) | 2023-09-13 |
CN108234891A (zh) | 2018-06-29 |
EP3780577A1 (en) | 2021-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019192590A1 (zh) | 拍照方法及移动终端 | |
CN108668083B (zh) | 一种拍照方法及终端 | |
US20210281669A1 (en) | Shooting method and terminal | |
WO2021136268A1 (zh) | 拍摄方法及电子设备 | |
JP7203859B2 (ja) | 画像処理方法及びフレキシブルスクリーン端末 | |
WO2019174628A1 (zh) | 拍照方法及移动终端 | |
CN108989672B (zh) | 一种拍摄方法及移动终端 | |
WO2021115479A1 (zh) | 显示控制方法及电子设备 | |
CN111010508B (zh) | 一种拍摄方法及电子设备 | |
WO2021036623A1 (zh) | 显示方法及电子设备 | |
CN109683777B (zh) | 一种图像处理方法及终端设备 | |
CN109194839B (zh) | 一种显示控制方法、终端和计算机可读存储介质 | |
CN109102555B (zh) | 一种图像编辑方法及终端 | |
CN111147752B (zh) | 变焦倍数调节方法、电子设备及介质 | |
WO2021017730A1 (zh) | 截图方法及终端设备 | |
CN109413333B (zh) | 一种显示控制方法及终端 | |
CN109448069B (zh) | 一种模板生成方法及移动终端 | |
CN108881721B (zh) | 一种显示方法及终端 | |
CN108924422B (zh) | 一种全景拍照方法及移动终端 | |
CN109120800A (zh) | 一种应用程序图标调整方法及移动终端 | |
CN111416935B (zh) | 一种拍摄方法及电子设备 | |
CN110908517B (zh) | 图像编辑方法、装置、电子设备及介质 | |
CN108174110B (zh) | 一种拍照方法及柔性屏终端 | |
CN110086998B (zh) | 一种拍摄方法及终端 | |
CN109559280B (zh) | 一种图像处理方法及终端 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19780822 Country of ref document: EP Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 2019780822 Country of ref document: EP Effective date: 20201104 |