CN111201773A - Photographing method and device, mobile terminal and computer readable storage medium - Google Patents


Info

Publication number
CN111201773A
Authority
CN
China
Prior art keywords
display
picture
content
screen
partial
Prior art date
Legal status
Pending
Application number
CN201780095805.0A
Other languages
Chinese (zh)
Inventor
李金鑫
付洋
Current Assignee
Shenzhen Royole Technologies Co Ltd
Original Assignee
Shenzhen Royole Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Royole Technologies Co Ltd filed Critical Shenzhen Royole Technologies Co Ltd
Publication of CN111201773A publication Critical patent/CN111201773A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a shooting method and device, a mobile terminal and a computer readable storage medium. The method comprises the following steps: displaying a framing picture in a framing display screen of the mobile terminal in an initial state; when a selection operation is detected on the framing picture, determining a target area selected by the selection operation; acquiring the content of a first partial picture corresponding to the target area in the framing picture; enlarging the content of the first partial picture; and displaying the framing picture and the enlarged content of the first partial picture on the framing display screen in a split-screen mode in real time. The shooting method allows a photographer, during framing, to select a partial picture of interest from the framing picture and have it enlarged, and displays the framing picture and the enlarged partial picture side by side in real time, so that the photographer can simultaneously view the overall preview effect of the framing picture and the detail information of the partial picture of interest, make a shooting decision quickly, and have a better use experience.

Description

Photographing method and device, mobile terminal and computer readable storage medium Technical Field
The present application relates to the field of image processing technologies, and in particular, to a shooting method and apparatus, a mobile terminal, and a computer-readable storage medium.
Background
When a mobile terminal is used for taking a picture, it is often difficult to clearly view the details of a region of interest in the viewfinder picture displayed on the display screen. The photographer can only decide whether to take the shot from the overall appearance of the viewfinder picture, and cannot determine whether the details of the region of interest are as desired.
Disclosure of Invention
In view of this, the present application provides a shooting method and apparatus, a mobile terminal, and a computer-readable storage medium, which can synchronously display, in real time during framing, the overall preview effect of a viewfinder picture and the detail information of a partial picture that the photographer is interested in, so as to help the photographer make a shooting decision quickly, improve shooting efficiency, and give the photographer a better use experience.
One aspect of the present application provides a shooting method applied to a mobile terminal, where the mobile terminal includes a view finding display screen. The shooting method comprises the following steps:
displaying a framing picture in a framing display screen of the mobile terminal in an initial state;
when a selection operation is detected on the view finding picture, determining a target area selected by the selection operation;
acquiring the content of a first partial picture corresponding to the target area in the view-finding picture;
enlarging the content of the first partial picture;
and displaying the framing picture and the enlarged content of the first partial picture on the framing display screen in a split-screen mode in real time.
Another aspect of the present application provides a camera device applied to a mobile terminal, where the mobile terminal includes a viewfinder display. The photographing apparatus includes:
the display module is used for displaying a framing picture in a framing display screen of the mobile terminal in an initial state;
the selection module is used for determining a target area selected by the selection operation when the selection operation is detected on the view finding picture;
an acquisition module, configured to acquire content of a first partial picture corresponding to the target area in the finder picture;
a zooming module for zooming in the content of the first partial picture;
the display module is further used for displaying the framing picture and the enlarged content of the first partial picture on the framing display screen in a split-screen mode in real time.
Yet another aspect of the present application provides a mobile terminal, which includes a processor, and the processor is configured to implement the steps of the shooting method according to any one of the above embodiments when executing a computer program stored in a memory.
Yet another aspect of the present application provides a computer-readable storage medium, on which computer instructions are stored, which when executed by a processor implement the steps of the photographing method according to any of the above embodiments.
The shooting method, the shooting apparatus and the mobile terminal allow a photographer to select a partial picture of interest from the viewfinder picture during framing, enlarge the content of the selected partial picture, and display the viewfinder picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view the overall preview effect of the viewfinder picture and the detail information of the partial picture of interest during framing. This helps the photographer make a shooting decision quickly, improves shooting efficiency, and gives a better use experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a photographing method according to a first embodiment of the present application.
Fig. 2 is a schematic view of a viewfinder display of the mobile terminal of the present application at a first time T1.
Fig. 3 is a schematic view of a screen after a selection operation is input in the viewfinder display of fig. 2.
Fig. 4 is a screen diagram illustrating the contents of the first partial screen selected by the selection operation in fig. 3.
Fig. 5 is a screen diagram illustrating the content of the viewfinder screen and the enlarged first partial screen displayed in a split manner on the viewfinder display screen of fig. 3.
Fig. 6 is another schematic view of the viewfinder screen and the enlarged first partial screen displayed on the viewfinder display screen of fig. 3 in a split-screen manner.
Fig. 7 is a flowchart of a photographing method according to a second embodiment of the present application.
Fig. 8 is a schematic view of the viewfinder display of fig. 5 at a second time T2.
Fig. 9 is a schematic view of a screen for enlarging the range of the target area shown in fig. 8.
Fig. 10 is a schematic diagram of a screen for changing the range of the target area shown in fig. 5.
Fig. 11 is a schematic view of a view finding display screen of fig. 8 displaying an image after the photographing operation is completed.
Fig. 12 is a flowchart of a photographing method according to a third embodiment of the present application.
Fig. 13A is a schematic view of a viewfinder display of the mobile terminal of the present application at a first time T1.
Fig. 13B is a schematic view of a screen of the sub display of the mobile terminal of the present application at the first time T1.
Fig. 14 is a schematic view of a screen after a selection operation is input in the viewfinder display of fig. 13A or the sub-display of fig. 13B.
Fig. 15 is a screen view showing the contents of the first partial screen selected by the selection operation in fig. 14.
Fig. 16A is a screen view schematically showing the content of the through screen and the enlarged first partial screen displayed in a split manner on the through screen of fig. 13A.
Fig. 16B is a screen view schematically illustrating the content of the enlarged first partial screen displayed on the sub-display in fig. 13B.
Fig. 17 is a flowchart of a photographing method according to a fourth embodiment of the present application.
Fig. 18A is a schematic view of the viewfinder display of fig. 13A at a second time T2.
Fig. 18B is a schematic view of the sub-display of fig. 13B at a second time T2.
Fig. 19A is a schematic view of the viewfinder display of fig. 13A displaying an image after the photographing operation is completed.
Fig. 19B is a schematic view of a picture displayed on the sub display panel of fig. 13B after the photographing operation is completed.
Fig. 20 is a functional block diagram of an imaging device according to an embodiment of the present application.
Fig. 21 is a functional module diagram of a mobile terminal according to a first embodiment of the present application.
Fig. 22 is a functional module diagram of a mobile terminal according to a second embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, a flowchart of a shooting method according to a first embodiment of the present application is shown, where the shooting method is applied to a mobile terminal. The mobile terminal can be an electronic device with a shooting function, such as a camera, a smart phone and a tablet personal computer. In this embodiment, the mobile terminal includes at least a camera and a viewfinder display. The camera is used for collecting images, and the view finding display screen is used for displaying a view finding interface and the like.
It should be noted that the shooting method according to the embodiment of the present application is not limited to the steps and the sequence in the flowchart shown in fig. 1. Steps in the illustrated flowcharts may be added, removed, or changed in order according to various needs.
As shown in fig. 1, the photographing method includes the steps of:
step 101, displaying a framing picture in a framing display screen of the mobile terminal in an initial state.
It can be understood that the framing picture displays the scene content within the focusing range of the camera of the mobile terminal. After the camera of the mobile terminal is started, the framing picture can be displayed in real time on the framing display screen. As shown in fig. 2, the framing display screen shows the real-time picture captured by the camera within its focusing range at a first time T1, so that the photographer can view the real-time picture on the mobile terminal.
In this embodiment, the viewfinder display of the mobile terminal displays the viewfinder screen in a full-screen manner in an initial state. It is understood that in other embodiments, the viewfinder display of the mobile terminal may not display the viewfinder in a full-screen manner in the initial state, for example, the viewfinder is displayed at a certain display screen ratio (e.g., 75%).
Step 102, when a selection operation is detected on the viewfinder picture, determining a target area selected by the selection operation.
It is understood that the selection operation may be a touch operation input by a touch object (e.g., a finger, a stylus pen, etc.) on the viewfinder display of the mobile terminal, or an operation input by a peripheral (e.g., a mouse, etc.) on the viewfinder display.
In one embodiment, the selection operation is to input an operation point in the viewing interface, for example, an operation point generated by clicking and touching the viewing interface or clicking the viewing interface with a mouse.
In one embodiment, the determining the target area selected by the selecting operation comprises:
taking an area surrounded by a circle with the operating point as the center and a first preset value as the radius as the target area; or
Taking the operation point as a center, and taking a region surrounded by a square with a second preset value as a side length as the target region, wherein four sides of the square are respectively parallel to each edge of the framing display screen; or
And taking the operation point as a center, a region surrounded by a rectangle with a third preset value as a length and a fourth preset value as a width as the target region, wherein four sides of the rectangle are respectively parallel to each edge of the view finding display screen.
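The three point-anchored options can be sketched as follows. This is a minimal illustration only; the function name, the concrete preset values, and the bounding-box representation are assumptions, not part of the patent.

```python
def target_area_from_point(px, py, shape="circle",
                           radius=50, side=100, length=120, width=80):
    """Return an axis-aligned bounding box (left, top, right, bottom)
    for the target area centred on the operation point (px, py).

    The keyword defaults stand in for the first to fourth preset
    values in the text; all sides stay parallel to the display edges.
    """
    if shape == "circle":          # circle with a preset radius
        half_w = half_h = radius
    elif shape == "square":        # square with a preset side length
        half_w = half_h = side / 2
    elif shape == "rectangle":     # rectangle with preset length x width
        half_w, half_h = length / 2, width / 2
    else:
        raise ValueError(f"unknown shape: {shape!r}")
    return (px - half_w, py - half_h, px + half_w, py + half_h)
```

For example, a square of side 100 centred on the point (100, 100) yields the box (50, 50, 150, 150).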
In another embodiment, the selection operation is a sliding track input in the viewing interface, for example, a sliding track generated by sliding and touching the viewing interface or clicking and sliding in the viewing interface with a mouse.
In the another embodiment, the determining the target area selected by the selecting operation includes:
determining a starting point and an end point of the sliding track;
taking an area surrounded by a circle with the diameter of the connecting line of the starting point and the end point of the sliding track as the target area; or
Taking an area surrounded by a square taking a connecting line of a starting point and an end point of the sliding track as a diagonal line as the target area, wherein four sides of the square are respectively parallel to each edge of the framing display screen; or
And taking an area surrounded by a rectangle taking a connecting line of a starting point and an end point of the sliding track as a diagonal line as the target area, wherein four sides of the rectangle are respectively parallel to each edge of the viewfinder display screen.
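The track-based variants can be sketched as below. This is a hypothetical helper; note that an axis-aligned square has the start-end chord as its exact diagonal only when the chord is at 45 degrees, so the square branch uses one reasonable approximation.

```python
def target_area_from_track(start, end, shape="rectangle"):
    """Target area from a sliding track, given its start and end points.
    Returns an axis-aligned bounding box (left, top, right, bottom)."""
    (x1, y1), (x2, y2) = start, end
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2      # midpoint of the chord
    if shape == "circle":
        # the start-end line is the circle's diameter
        r = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 / 2
        return (cx - r, cy - r, cx + r, cy + r)
    if shape == "square":
        # axis-aligned square on the chord: take the larger axis span
        # as the side length (approximation, see lead-in note)
        half = max(abs(x2 - x1), abs(y2 - y1)) / 2
        return (cx - half, cy - half, cx + half, cy + half)
    # rectangle: start and end are opposite corners of the diagonal
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
```

The rectangle branch simply treats the two track endpoints as opposite corners, which matches the diagonal wording directly.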
Step 103, as shown in fig. 3, displaying an identification frame K on the viewfinder screen to identify the target area.
In this embodiment, the identification box K is shown as a dashed box.
Step 104, as shown in fig. 4, acquires the content of the first partial screen corresponding to the target area in the viewfinder screen.
Step 105, enlarging the content of the first partial screen.
Step 106, as shown in fig. 5, displaying the viewfinder picture and the enlarged content of the first partial picture on the viewfinder display screen in a split-screen mode in real time.
In this embodiment, the step of displaying the content of the finder screen and the enlarged first partial screen on the finder display screen in real time in a split manner includes:
dividing a display area of the viewfinder display into a first display subarea R1 and a second display subarea R2 which are arranged in parallel;
the finder screen is displayed in real time in the first display sub-region R1, and the enlarged content of the first partial screen is displayed in real time in the second display sub-region R2.
In one embodiment, as shown in fig. 5, the first display sub-region R1 and the second display sub-region R2 are arranged in the display region of the viewfinder display in a longitudinally juxtaposed manner. It can be understood that such a split-screen display manner may correspond to a situation where the viewfinder display of the mobile terminal is in a vertical position.
Alternatively, in another embodiment, as shown in fig. 6, the first display sub-region R1 and the second display sub-region R2 are arranged in the display region of the viewfinder display screen in a laterally juxtaposed manner. It can be understood that such a split-screen display manner may correspond to a situation where the viewfinder display of the mobile terminal is placed horizontally.
In the present embodiment, the step of displaying the finder screen in real time on the first display sub-area R1 includes:
reducing the finder screen in accordance with the size of the first display sub-region R1;
the reduced finder screen is displayed in real time in the first display sub-area R1.
In this embodiment, the step of enlarging the content of the first partial screen includes:
the contents of the first partial screen are enlarged in accordance with the size of the second display sub-area R2.
The shooting method allows a photographer to select a partial picture of interest from the viewfinder picture during framing, enlarges the content of the selected partial picture, and displays the viewfinder picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view the overall preview effect of the viewfinder picture and the detail information of the partial picture of interest in real time during framing. This helps the photographer make a shooting decision quickly, improves shooting efficiency, and gives a better use experience.
It can be understood that the photographer can view the detail information of the partial picture of interest in real time through the enlarged first partial picture, so as to judge whether the posture, expression and the like of the target object of interest are in place, and thereby adjust the shooting parameters (such as the shooting angle) in real time or direct the target object to adjust itself, so that a viewfinder picture satisfactory to the photographer is obtained before the photographing operation is performed.
Please refer to fig. 7, which is a flowchart illustrating a photographing method according to a second embodiment of the present application. The second embodiment mainly differs from the first embodiment in that it further includes steps of tracking the target object with a target tracking technique and re-determining a new target area when a change in the content of the viewfinder picture is detected. It should be noted that, without departing from the spirit or essential features of the present application, the specific solutions applicable to the first embodiment are also correspondingly applicable to the second embodiment; for brevity and to avoid repetition, they are not described again here.
As shown in fig. 7, the photographing method includes the steps of:
step 201, displaying a framing picture in a framing display screen of the mobile terminal in an initial state.
Step 202, when a selection operation is detected on the viewfinder image, determining a target area selected by the selection operation.
Step 203, as shown in fig. 3, displaying an identification frame K on the viewfinder screen to identify the target area.
Step 204, as shown in fig. 4, acquires the content of the first partial screen corresponding to the target area in the viewfinder screen.
Step 205, enlarge the content of the first partial screen.
Step 206, as shown in fig. 5, the display area of the viewfinder display is divided into a first display sub-area R1 and a second display sub-area R2 which are arranged in parallel, the viewfinder image is displayed in the first display sub-area R1 in real time, and the content of the enlarged first partial image is displayed in the second display sub-area R2 in real time.
Step 207, extracting the features of the local picture content corresponding to the target area, and determining the target object according to the extracted features.
And step 208, tracking the target object by using a target tracking technology when detecting that the content of the framing picture changes.
Step 209, re-determining a new target area according to the current position of the target object in the viewfinder frame.
It is understood that the photographing method further includes: the identification frame K is displayed on the finder screen to identify the new target area (as shown in fig. 8).
Step 210, obtaining the content of the second partial picture corresponding to the new target area.
Step 211, enlarging the content of the second partial picture.
In step 212, the display screen of the second display sub-area R2 is updated in real time according to the enlarged content of the second partial screen.
In this embodiment, the step of enlarging the content of the second partial screen includes:
the contents of the second partial picture are enlarged in accordance with the size of the second display sub-area R2.
As shown in fig. 8, a real-time image captured by the camera in the focusing range of the camera at the second time T2 is displayed in the first display sub-area R1, and the content of the second partial image after being enlarged is displayed in the second display sub-area R2.
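Step 209's re-derivation of the target area from the tracked object's current position can be sketched as a simple translation of the previous identification frame. The tracker itself (for example a correlation-filter or feature-matching tracker) is outside this sketch, and the function names are assumptions:

```python
def retarget(prev_box, prev_center, new_center):
    """Translate the target area so it stays centred on the tracked
    object's current position (steps 208-209). Boxes are
    (left, top, right, bottom); centers are (x, y) points supplied
    by whatever target-tracking technique is in use."""
    dx = new_center[0] - prev_center[0]
    dy = new_center[1] - prev_center[1]
    l, t, r, b = prev_box
    return (l + dx, t + dy, r + dx, b + dy)
```

The new box is then used to crop the second partial picture (step 210), which is enlarged to R2's size exactly as the first partial picture was.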
In this embodiment, the photographing method further includes:
adjusting the attribute of the identification frame K according to the input adjustment operation to adjust the range of the target area, wherein the attribute of the identification frame comprises the position, the size or the combination of the position and the size of the identification frame;
acquiring the content of a third partial picture corresponding to the adjusted target area;
enlarging the content of the third partial picture;
and updating the display picture of the second display sub-area R2 in real time according to the content of the enlarged third partial picture.
For example, as shown in fig. 9, the identification frame K shown in fig. 8 may be enlarged according to the input adjustment operation to expand the range of the target area. Similarly, the identification frame K may be reduced according to the input adjustment operation to shrink the range of the target area. It should be noted that adjusting the size of the identification frame K only changes the size of the frame itself; it does not scale the picture content enclosed by the frame.
In addition, the identification frame K may be moved according to the input adjustment operation to reselect a local region of interest, for example, as shown in fig. 5, the object of interest in the target region is a woman, and a man may be selected as the object of interest after moving the identification frame K to the position shown in fig. 10.
It is understood that the position and size of the identification frame K can be adjusted according to the input adjustment operation, so as to reselect the local region of interest and adjust the range of the local region.
In this embodiment, the step of enlarging the content of the third partial screen includes:
the content of the third partial screen is enlarged in accordance with the size of the second display sub-area R2.
Step 213, executing a photographing task according to the input photographing operation and generating a corresponding image.
Step 214, as shown in fig. 11, displaying the image in full screen in the viewfinder display.
Optionally, in this embodiment, the shooting method further includes:
and when the display mode of the image is exited, displaying a new framing picture in the framing display screen of the mobile terminal again so as to execute the next shooting task.
The shooting method allows a photographer to select a partial picture of interest from the viewfinder picture during framing, enlarges the content of the selected partial picture, and displays the viewfinder picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view the overall preview effect of the viewfinder picture and the detail information of the partial picture of interest during framing. In addition, the target area can be automatically adjusted as the content of the viewfinder picture changes in real time, so that the enlarged partial picture stays locked on the detail information the photographer is interested in while the posture, expression, position and the like of the photographed subject are adjusted. This helps the photographer make a shooting decision quickly, improves shooting efficiency, and gives a better use experience.
Please refer to fig. 12, which is a flowchart illustrating a photographing method according to a third embodiment of the present application, wherein the photographing method is applied to a mobile terminal. In a third embodiment, the mobile terminal includes at least a camera, a viewfinder display, and a sub-display. The camera is used for collecting images, and the view finding display screen and the auxiliary display screen are used for displaying a view finding interface and the like. In a third embodiment, the viewfinder display is disposed on the front side of the mobile terminal, and the sub-display is disposed on the back side of the mobile terminal. It should be noted that the front surface is a surface of the mobile terminal facing a photographer when the mobile terminal is in use, and correspondingly, the back surface is a surface of the mobile terminal facing away from the photographer when the mobile terminal is in use. It can be understood that the sizes of the view finding display screen and the auxiliary display screen can be designed according to actual needs, and the size of the view finding display screen and the size of the auxiliary display screen are not limited in the application. It should be noted that the schematic diagrams of the present application, for example, fig. 13A, 13B, 16A, 16B, 18A, 18B, 19A, and 19B mentioned below, are only used for schematically illustrating the picture of the viewfinder display or the sub-display, and do not represent that the size of the viewfinder display is larger than, equal to, or smaller than the size of the sub-display.
The third embodiment is mainly different from the first embodiment in that the third embodiment further includes a step of displaying the contents of the enlarged partial screen on the sub-display. It should be noted that, within the scope of the spirit or the basic features of the present application, each specific solution applicable to the first embodiment may also be correspondingly applicable to the third embodiment, and for the sake of brevity and avoidance of repetition, the detailed description thereof is omitted here.
As shown in fig. 12, the photographing method includes the steps of:
step 301, displaying a framing picture in a framing display screen and a secondary display screen of the mobile terminal respectively in an initial state.
As shown in fig. 13A, displayed in the viewfinder display screen is a live view captured by the camera in the focusing range thereof at the first time T1 for the photographer to view on the mobile terminal. As shown in fig. 13B, displayed in the secondary display screen is a real-time picture captured by the camera in the focus range thereof at the first time T1 for the photographer to view on the mobile terminal.
In this embodiment, both the viewfinder display screen and the secondary display screen of the mobile terminal display the framing picture in a full-screen manner in the initial state. It is understood that in other embodiments, the viewfinder display screen and/or the secondary display screen may not display the framing picture in a full-screen manner in the initial state; for example, the framing picture may be displayed at a certain proportion of the display screen (e.g., 75%). It should be noted that "and/or" in this application covers both the "and" case and the "or" case; for example, "A and/or B" covers three parallel cases: A alone, B alone, and A together with B.
Step 302, when a selection operation is detected on the viewfinder frame, determining a target area selected by the selection operation.
It is understood that the selection operation may be a touch operation input by a touch object (e.g., a finger, a stylus pen, etc.) on the viewfinder display and/or the sub-display of the mobile terminal, or an operation input by a peripheral (e.g., a mouse, etc.) on the viewfinder display and/or the sub-display.
Step 303, as shown in fig. 14, displaying an identification frame K on the viewfinder screen to identify the target area.
In step 304, as shown in fig. 15, the content of the first partial screen corresponding to the target area in the finder screen is acquired.
Step 305, enlarging the content of the first partial screen.
Step 306, as shown in fig. 16A, displaying the content of the viewfinder screen and the enlarged first partial screen on the viewfinder display screen in a split screen manner in real time, and, as shown in fig. 16B, displaying the content of the enlarged first partial screen on the sub-display screen in real time.
In this embodiment, the step of displaying the content of the finder screen and the enlarged first partial screen on the finder display screen in real time in a split manner includes:
dividing a display area of the viewfinder display into a first display subarea R1 and a second display subarea R2 which are arranged in parallel;
the finder screen is displayed in real time in the first display sub-region R1, and the enlarged content of the first partial screen is displayed in real time in the second display sub-region R2.
In this embodiment, the step of enlarging the content of the first partial screen includes:
the content of the first partial screen is enlarged in accordance with the size of the second display sub-area R2, and the content of the first partial screen is enlarged in accordance with the size of the display area of the sub-display.
Wherein the content of the first partial screen displayed in the second display sub-region R2 of the viewfinder display is enlarged in accordance with the size of the second display sub-region R2, and the content of the first partial screen displayed in the sub-display is enlarged in accordance with the size of the display region of the sub-display.
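The two enlargement operations described above — one for the second display sub-area R2 and one for the sub-display — reduce to computing a uniform scale factor from the partial picture's size and the destination region's size. The sketch below is illustrative only: the disclosure does not fix a scaling rule, so the aspect-ratio-preserving policy, the function names, and the example dimensions are all assumptions.

```python
def fit_scale(src_w, src_h, dst_w, dst_h):
    """Largest uniform scale at which the partial picture (src_w x src_h)
    still fits entirely inside the target region (dst_w x dst_h)."""
    return min(dst_w / src_w, dst_h / src_h)

def enlarge_to_region(src_w, src_h, dst_w, dst_h):
    """Return the enlarged pixel size of the partial picture for one region.
    The same partial content is scaled independently for R2 and for the
    sub-display, since the two regions generally differ in size."""
    s = fit_scale(src_w, src_h, dst_w, dst_h)
    return round(src_w * s), round(src_h * s)

# A 200x150 partial picture enlarged for a hypothetical 1080x960 sub-area R2
# and, separately, for a hypothetical 720x1280 sub-display:
r2_size = enlarge_to_region(200, 150, 1080, 960)     # scaled by 5.4
sub_size = enlarge_to_region(200, 150, 720, 1280)    # scaled by 3.6
```

Computing the scale per region, rather than once, is what allows the same first partial picture to fill R2 and the sub-display at different magnifications, as the embodiment describes.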
According to the shooting method, on the one hand, the photographer is allowed, during framing, to select a partial picture of interest from the viewfinder picture shown on the viewfinder display; the content of the selected partial picture is enlarged, and the viewfinder picture and the enlarged partial picture are displayed side by side in real time. The photographer can thus synchronously view, in real time, the overall preview effect of the viewfinder picture and the detail information of the partial picture of interest, and can conveniently adjust the posture, expression, position and the like of the shot object, which helps the photographer make a shooting decision quickly, improves shooting efficiency, and provides a better use experience. On the other hand, on a mobile terminal with dual display screens, the shot object can check whether his or her own posture, expression and the like are in place and adjust in real time, so that a photo satisfactory to the photographer/shot object is taken.
Please refer to fig. 17, which is a flowchart illustrating a photographing method according to a fourth embodiment of the present application. The main difference between the fourth embodiment and the third embodiment is that the fourth embodiment further includes a step of tracking the target object by using a target tracking technology and re-determining a new target area when detecting that the content of the finder screen changes. It should be noted that, within the scope of the spirit or the basic features of the present application, each specific solution applicable to the third embodiment may also be correspondingly applicable to the fourth embodiment, and for the sake of brevity and avoidance of repetition, detailed description thereof is omitted here.
As shown in fig. 17, the photographing method includes the steps of:
step 401, in an initial state, displaying a framing picture in a framing display screen and a secondary display screen of the mobile terminal respectively.
Step 402, when a selection operation is detected on the viewfinder frame, determining a target area selected by the selection operation.
In step 403, as shown in fig. 14, an identification frame K is displayed on the viewfinder screen to identify the target area.
In step 404, as shown in fig. 15, the content of the first partial screen corresponding to the target area in the finder screen is acquired.
Step 405, zoom in on the content of the first partial screen.
Step 406, as shown in fig. 16A and 16B, the display area of the viewfinder display is divided into a first display sub-area R1 and a second display sub-area R2 which are arranged in parallel, the viewfinder image is displayed in the first display sub-area R1 in real time, and the content of the enlarged first partial image is displayed in the second display sub-area R2 and the sub-display in real time.
In this embodiment, the step of enlarging the content of the first partial screen includes:
the content of the first partial screen is enlarged in accordance with the size of the second display sub-area R2, and the content of the first partial screen is enlarged in accordance with the size of the display area of the sub-display.
Wherein the content of the first partial screen displayed in the second display sub-region R2 of the viewfinder display is enlarged in accordance with the size of the second display sub-region R2, and the content of the first partial screen displayed in the sub-display is enlarged in accordance with the size of the display region of the sub-display.
Step 407, extracting the features of the local image content corresponding to the target area, and determining a target object according to the extracted features.
Step 408, when detecting that the content of the framing picture changes, tracking the target object by using a target tracking technology.
Step 409, re-determining a new target area according to the current position of the target object in the viewing picture.
Step 410, obtaining the content of the second partial picture corresponding to the new target area.
Step 411, enlarging the content of the second partial picture.
In step 412, the display screens of the second display sub-area R2 and the sub-display are updated in real time according to the enlarged content of the second partial screen.
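Steps 408 and 409 above amount to re-centring a target area of unchanged size on the tracked object's current position in the viewfinder picture. A minimal sketch follows; the coordinate convention, the clamping policy that keeps the area inside the frame, and the example dimensions are illustrative assumptions rather than part of the disclosed method.

```python
def recenter_target_area(cx, cy, w, h, frame_w, frame_h):
    """Re-determine the target area as a w x h rectangle centred on the
    tracked object's current position (cx, cy), clamped so the whole
    area stays inside the frame_w x frame_h viewfinder picture.
    Returns (x, y, w, h) with (x, y) the top-left corner."""
    x = min(max(cx - w / 2, 0), frame_w - w)
    y = min(max(cy - h / 2, 0), frame_h - h)
    return (x, y, w, h)

# Object drifting near the right edge of a 1920x1080 viewfinder picture:
# the 400x300 target area is clamped rather than pushed off-screen.
area = recenter_target_area(1900, 500, 400, 300, 1920, 1080)
```

The content of the second partial picture in step 410 would then be cropped from this re-determined rectangle before being enlarged.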
In this embodiment, the step of enlarging the content of the second partial screen includes:
the content of the second partial picture is enlarged in accordance with the size of the second display sub-area R2, and the content of the second partial picture is enlarged in accordance with the size of the display area of the sub-display.
Wherein the content of the second partial screen displayed in the second display sub-area R2 of the viewfinder display is enlarged in accordance with the size of the second display sub-area R2, and the content of the second partial screen displayed in the sub-display is enlarged in accordance with the size of the display area of the sub-display.
As shown in fig. 18A, a real-time picture captured by the camera in the focusing range thereof at the second time T2 is displayed in the first display sub-region R1, and the content of the second partial picture after being enlarged is displayed in the second display sub-region R2. As shown in fig. 18B, the content of the second partial screen after enlargement is displayed in the sub display screen.
In this embodiment, the photographing method further includes:
adjusting the attribute of the identification frame K according to the input adjustment operation to adjust the range of the target area, wherein the attribute of the identification frame comprises the position, the size or the combination of the position and the size of the identification frame;
acquiring the content of a third partial picture corresponding to the adjusted target area;
enlarging the content of the third partial picture;
and updating the display pictures of the second display subarea R2 and the secondary display screen in real time according to the content of the enlarged third partial picture.
For example, the identification frame K may be enlarged according to the input adjustment operation to expand the range of the target area; conversely, the identification frame K may be shrunk according to the input adjustment operation to narrow the range of the target area. It should be noted that adjusting the size of the identification frame K only means resizing the frame itself; it does not scale the picture content enclosed by the frame. In addition, the identification frame K may be moved according to the input adjustment operation so as to reselect the partial region of interest. Alternatively, the position and the size of the identification frame K may both be adjusted according to the input adjustment operation, so as to reselect the partial region of interest and adjust its range.
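The frame adjustments just described (move, resize, or both, without scaling the enclosed picture content) can be sketched as a single update on the frame's attributes. The delta-based interface, the clamping to the viewfinder bounds, and the default bounds are assumptions made for illustration.

```python
def adjust_frame(frame, move=(0, 0), resize=(0, 0), bounds=(1920, 1080)):
    """Adjust the identification frame K's attributes: translate it by
    `move` (dx, dy) and/or grow or shrink it by `resize` (dw, dh).
    Only the frame itself changes; the picture content it encloses is
    not scaled. The frame is kept inside the viewfinder `bounds`."""
    x, y, w, h = frame
    w = max(1, min(w + resize[0], bounds[0]))   # resize first, keep >= 1 px
    h = max(1, min(h + resize[1], bounds[1]))
    x = min(max(x + move[0], 0), bounds[0] - w) # then clamp the position
    y = min(max(y + move[1], 0), bounds[1] - h)
    return (x, y, w, h)

moved = adjust_frame((100, 100, 200, 150), move=(50, 0))      # reselect region
grown = adjust_frame((100, 100, 200, 150), resize=(100, 50))  # expand range
```

After any such adjustment, the content of the third partial picture is re-cropped from the new frame and enlarged as in the following steps.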
In this embodiment, the step of enlarging the content of the third partial screen includes:
the content of the third partial screen is enlarged in accordance with the size of the second display sub-area R2, and the content of the third partial screen is enlarged in accordance with the size of the display area of the sub-display.
Wherein the content of the third partial screen displayed in the second display sub-area R2 of the viewfinder display is enlarged in accordance with the size of the second display sub-area R2, and the content of the third partial screen displayed in the sub-display is enlarged in accordance with the size of the display area of the sub-display.
And step 413, executing a photographing task according to the input photographing operation, and generating a corresponding image.
Step 414, as shown in fig. 19A and 19B, displaying the image in the viewfinder display screen and the sub display screen in full screen.
Optionally, in other embodiments, the image may not be displayed in the secondary display screen.
Optionally, in this embodiment, the shooting method further includes:
and when the display mode of the image is exited, displaying a new framing picture in the framing display screen and the auxiliary display screen of the mobile terminal again so as to execute the next shooting task.
The shooting method allows the photographer and/or the shot object to select a partial picture of interest from the viewfinder picture during framing, enlarges the content of the selected partial picture, and displays the viewfinder picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view the overall preview effect of the viewfinder picture and the detail information of the partial picture of interest. On the other hand, the target area is adjusted automatically as the content of the viewfinder picture changes in real time, so that the enlarged partial picture stays locked on the detail information of interest to the photographer and/or the shot object, and the posture, expression, position and the like of the shot object can be adjusted accordingly. This helps the photographer make a shooting decision quickly to improve shooting efficiency; on a mobile terminal with dual display screens it also helps the shot object check whether his or her own posture, expression and the like are in place and self-adjust in real time, giving the photographer and/or the shot object a better use experience.
Referring to fig. 20, a schematic structural diagram of a photographing apparatus 10 according to an embodiment of the present application is shown; the photographing apparatus 10 is applied to a mobile terminal. The mobile terminal may be an electronic device with a shooting function, such as a camera, a smartphone or a tablet computer. In this embodiment, the mobile terminal includes at least a camera and a viewfinder display. The camera is used for collecting images, and the viewfinder display is used for displaying the viewfinder interface and the like.
The photographing apparatus 10 may include one or more modules that are stored in a memory of the mobile terminal and configured to be executed by one or more processors (one processor in this embodiment) to implement the present application. For example, referring to fig. 20, the photographing apparatus 10 may include a display module 111, a selection module 112, an acquisition module 113, a zoom module 114 and a split-screen module 115. A module referred to in the embodiments of the present application is a program segment that performs a specific function, and is better suited than a whole program to describing the execution process of software in the processor. It is understood that, corresponding to the embodiments of the photographing method described above, the photographing apparatus 10 may include some or all of the functional modules shown in fig. 20; the functions of the modules will be described in detail below.
In this embodiment, the display module 111 is configured to display a viewfinder screen on a viewfinder display of the mobile terminal in an initial state.
It can be understood that the scene content within the focusing range of the camera of the mobile terminal is displayed in the framing picture. After the camera of the mobile terminal is started, the view-finding picture can be displayed in real time in the view-finding display screen, as shown in fig. 2, a real-time picture captured by the camera in a focusing range of the camera at a first time T1 is displayed in the view-finding display screen, so that a photographer can view the real-time picture on the mobile terminal.
In this embodiment, the viewfinder display of the mobile terminal displays the viewfinder picture in a full-screen manner in the initial state. It is understood that in other embodiments, the viewfinder display of the mobile terminal may not display the viewfinder picture in a full-screen manner in the initial state; for example, the viewfinder picture may occupy a certain proportion (e.g., 75%) of the display screen.
In another embodiment, the mobile terminal further comprises a secondary display screen. The view finding display screen is arranged on the front face of the mobile terminal, and the auxiliary display screen is arranged on the back face of the mobile terminal. It should be noted that the front surface is a surface of the mobile terminal facing a photographer when the mobile terminal is in use, and correspondingly, the back surface is a surface of the mobile terminal facing away from the photographer when the mobile terminal is in use.
In the another embodiment, the display module 111 is configured to display a viewfinder screen in the viewfinder display and the sub-display of the mobile terminal in an initial state.
As shown in fig. 13A, displayed in the viewfinder display screen is a live view captured by the camera in the focusing range thereof at the first time T1 for the photographer to view on the mobile terminal. As shown in fig. 13B, displayed in the secondary display screen is a real-time picture captured by the camera in the focus range thereof at the first time T1 for the photographer to view on the mobile terminal.
In this other embodiment, the viewfinder display and the sub-display of the mobile terminal both display the viewfinder picture in a full-screen manner in the initial state. It is understood that in other embodiments, the viewfinder display and/or the sub-display may not display the viewfinder picture in a full-screen manner in the initial state; for example, the viewfinder picture may occupy a certain proportion (e.g., 75%) of the display screen.
The selecting module 112 is configured to determine a target area selected by a selecting operation when the selecting operation is detected on the viewfinder.
It is understood that the selection operation may be a touch operation input by a touch object (e.g., a finger, a stylus pen, etc.) on the viewfinder display of the mobile terminal, or an operation input by a peripheral (e.g., a mouse, etc.) on the viewfinder display.
In other embodiments, the selection operation may also be a touch operation input by a touch object (e.g., a finger, a stylus pen, etc.) on a secondary display screen of the mobile terminal, or an operation input by a peripheral (e.g., a mouse, etc.) on the secondary display screen.
In one embodiment, the selection operation is the input of an operation point in the viewing interface, for example, an operation point generated by tapping the viewing interface or by clicking in the viewing interface with a mouse.
In the above embodiment, the selecting module 112, when determining the target area selected by the selecting operation, is specifically configured to:
taking an area surrounded by a circle with the operating point as the center and a first preset value as the radius as the target area; or
Taking the operation point as a center, and taking a region surrounded by a square with a second preset value as a side length as the target region, wherein four sides of the square are respectively parallel to each edge of the framing display screen; or
And taking the operation point as a center, a region surrounded by a rectangle with a third preset value as a length and a fourth preset value as a width as the target region, wherein four sides of the rectangle are respectively parallel to each edge of the view finding display screen.
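The three alternatives above each derive a target area from a single operation point. A minimal sketch, returning each shape as an axis-aligned (x, y, width, height) rectangle with sides parallel to the display's edges; the parameter names stand in for the first to fourth preset values, and the concrete default numbers are illustrative assumptions.

```python
def region_from_point(px, py, shape="square", r=80, side=160, length=240, width=160):
    """Target area derived from an operation point (px, py), returned as
    an axis-aligned bounding rectangle (x, y, w, h) centred on the point.
    r, side, length and width stand in for the first, second, third and
    fourth preset values of the disclosure; defaults are illustrative."""
    if shape == "circle":
        # circle of radius r centred on the point (given by its bounding box)
        return (px - r, py - r, 2 * r, 2 * r)
    if shape == "square":
        # square of the given side length, sides parallel to the screen edges
        return (px - side / 2, py - side / 2, side, side)
    # rectangle with the given length (horizontal) and width (vertical)
    return (px - length / 2, py - width / 2, length, width)
```

Any of the three yields a rectangle that the subsequent steps can crop and enlarge uniformly, which is presumably why all three variants are expressed with sides parallel to the display's edges.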
In another embodiment, the selection operation is a sliding track input in the viewing interface, for example, a sliding track generated by a sliding touch on the viewing interface or by clicking and dragging in the viewing interface with a mouse.
In another embodiment, when determining the target area selected by the selecting operation, the selecting module 112 is specifically configured to:
determining a starting point and an end point of the sliding track;
taking an area surrounded by a circle with the diameter of the connecting line of the starting point and the end point of the sliding track as the target area; or
Taking an area surrounded by a square taking a connecting line of a starting point and an end point of the sliding track as a diagonal line as the target area, wherein four sides of the square are respectively parallel to each edge of the framing display screen; or
And taking an area surrounded by a rectangle taking a connecting line of a starting point and an end point of the sliding track as a diagonal line as the target area, wherein four sides of the rectangle are respectively parallel to each edge of the viewfinder display screen.
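The sliding-track variants above construct the target area from the line connecting the track's start and end points: as the diagonal of a rectangle or square, or as the diameter of a circle. A sketch under the same (x, y, width, height) convention as before; the shape-selector interface is an assumption for illustration.

```python
def region_from_track(start, end, shape="rect"):
    """Target area from a sliding track's start and end points.
    "rect":   start-end line is the rectangle's diagonal.
    "square": start-end line is the square's diagonal (side = d / sqrt(2)).
    "circle": start-end line is the circle's diameter (bounding box returned).
    Sides are parallel to the display's edges in every case."""
    (x0, y0), (x1, y1) = start, end
    if shape == "rect":
        return (min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0))
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2          # shared centre point
    d = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5   # diagonal / diameter length
    side = d / 2 ** 0.5 if shape == "square" else d
    return (cx - side / 2, cy - side / 2, side, side)
```

Note that only the rectangle variant uses the track's corners directly; the square and circle are centred on the track's midpoint, so a diagonal drag and its reverse select the same area.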
In this embodiment, the display module 111 is further configured to display an identification frame K (shown in fig. 3 or 14) on the viewfinder screen to identify the target area. In this embodiment, the identification box K is shown as a dashed box.
The obtaining module 113 is configured to obtain the content of the first partial picture corresponding to the target area in the viewfinder picture (as shown in fig. 4 or 15). The zoom module 114 is configured to enlarge the content of the first partial picture. The display module 111 is further configured to display the viewfinder picture and the content of the enlarged first partial picture on the viewfinder display in a split-screen manner in real time (as shown in fig. 5 or 16A).
In this embodiment, the split screen module 115 is configured to divide the display area of the viewfinder display into a first display sub-area R1 and a second display sub-area R2 which are arranged in parallel. The display module 111 is specifically configured to display the viewfinder image in the first display sub-area R1 in real time, and display the content of the enlarged first partial image in the second display sub-area R2 in real time.
In one embodiment, as shown in fig. 5, the first display sub-region R1 and the second display sub-region R2 are arranged in the display region of the viewfinder display in a longitudinally juxtaposed manner. It can be understood that such a split-screen display manner may correspond to a situation where the viewfinder display of the mobile terminal is in a vertical position.
Alternatively, in another embodiment, as shown in fig. 6, the first display sub-region R1 and the second display sub-region R2 are arranged in the display region of the viewfinder display screen in a laterally juxtaposed manner. It can be understood that such a split-screen display manner may correspond to a situation where the viewfinder display of the mobile terminal is placed horizontally.
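The two split-screen arrangements just described — longitudinally juxtaposed when the display is vertical, laterally juxtaposed when it is horizontal — can be sketched as a single division of the display area into R1 and R2. The equal 50/50 split and the orientation test are assumptions for illustration; the disclosure does not fix the proportion between the sub-areas.

```python
def split_display(width, height):
    """Divide the viewfinder display into two juxtaposed sub-areas,
    each returned as (x, y, w, h). Portrait (height >= width): R1 is
    stacked above R2; landscape: R1 sits to the left of R2.
    An equal split between R1 and R2 is assumed for illustration."""
    if height >= width:                       # vertical placement (fig. 5)
        r1 = (0, 0, width, height // 2)
        r2 = (0, height // 2, width, height - height // 2)
    else:                                     # horizontal placement (fig. 6)
        r1 = (0, 0, width // 2, height)
        r2 = (width // 2, 0, width - width // 2, height)
    return r1, r2

portrait = split_display(1080, 1920)   # R1 above R2
landscape = split_display(1920, 1080)  # R1 left of R2
```

R1 then receives the (reduced) viewfinder picture in real time and R2 the enlarged first partial picture, as the display module 111 describes.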
In this embodiment, the zooming module 114 is further configured to zoom out the viewfinder screen according to the size of the first display sub-region R1, and zoom in the content of the first partial screen according to the size of the second display sub-region R2. The display module 111 is specifically configured to display the reduced finder screen on the first display sub-region R1 in real time.
In another embodiment, the display module 111 is further configured to display the content of the viewfinder screen and the enlarged first partial screen on the viewfinder display screen in a split-screen manner in real time (as shown in fig. 16A), and display the content of the enlarged first partial screen on the sub-display screen in real time (as shown in fig. 16B).
Accordingly, in the another embodiment, the zooming module 114 is specifically configured to enlarge the content of the first partial screen according to the size of the second display sub-area R2, and enlarge the content of the first partial screen according to the size of the display area of the secondary display screen.
Wherein the content of the first partial screen displayed in the second display sub-region R2 of the viewfinder display is enlarged in accordance with the size of the second display sub-region R2, and the content of the first partial screen displayed in the sub-display is enlarged in accordance with the size of the display region of the sub-display.
The photographing apparatus allows the photographer and/or the shot object to select a partial picture of interest from the viewfinder picture during framing, enlarges the content of the selected partial picture, and displays the viewfinder picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view, in real time, the overall preview effect of the viewfinder picture and the detail information of the partial picture of interest, which helps the photographer make a shooting decision quickly, improves shooting efficiency, and provides a better use experience. On the other hand, on a mobile terminal with dual display screens, the shot object can check whether his or her own posture, expression and the like are in place and adjust in real time, so that a photo satisfactory to the photographer/shot object is taken.
It can be understood that the photographer can view the detail information of the interested partial picture in real time through the enlarged first partial picture so as to judge whether the gesture, expression and the like of the concerned target object are in place, thereby adjusting the shooting parameters (such as shooting angle and the like) or the gesture and the like of the target object in real time or commanding the target object to perform self-adjustment so as to obtain a view-finding picture satisfying the photographer and then performing the photographing operation.
Referring to fig. 21 again, in the present embodiment, the photographing apparatus 10 further includes a feature analysis module 116, a detection module 117, and a tracking module 118, wherein the feature analysis module 116 is configured to extract features of local image content corresponding to the target area, and determine a target object according to the extracted features. The detecting module 117 is configured to detect whether content of the framing picture changes. The tracking module 118 is configured to track the target object by using a target tracking technology when detecting that the content of the viewfinder frame changes.
In this embodiment, the selecting module 112 is further configured to determine a new target area according to the current position of the target object in the viewfinder frame.
It is understood that the display module 111 is further configured to display the identification frame K on the viewfinder screen to identify the new target area (as shown in fig. 8).
The obtaining module 113 is further configured to obtain content of a second partial screen corresponding to the new target area. The scaling module 114 is further configured to enlarge the content of the second partial picture. The display module 111 is further configured to update the display screen of the second display sub-area R2 in real time according to the content of the enlarged second partial screen.
In this embodiment, the zooming module 114 is specifically configured to zoom in the content of the second partial screen according to the size of the second display sub-area R2.
As shown in fig. 8, a real-time image captured by the camera in the focusing range of the camera at the second time T2 is displayed in the first display sub-area R1, and the content of the second partial image after being enlarged is displayed in the second display sub-area R2.
In another embodiment, the display module 111 is further configured to update the display screens of the second display sub-region R2 and the secondary display screen in real time according to the content of the enlarged second partial screen.
Accordingly, in the another embodiment, the zooming module 114 is specifically configured to enlarge the content of the second partial screen according to the size of the second display sub-area R2, and enlarge the content of the second partial screen according to the size of the display area of the secondary display screen.
Wherein the content of the second partial screen displayed in the second display sub-area R2 of the viewfinder display is enlarged in accordance with the size of the second display sub-area R2, and the content of the second partial screen displayed in the sub-display is enlarged in accordance with the size of the display area of the sub-display.
As shown in fig. 18A, a real-time picture captured by the camera in the focusing range thereof at the second time T2 is displayed in the first display sub-region R1, and the content of the second partial picture after being enlarged is displayed in the second display sub-region R2. As shown in fig. 18B, the content of the second partial screen after enlargement is displayed in the sub display screen.
Referring to fig. 20 again, in the present embodiment, the photographing apparatus 10 further includes an adjusting module 119. The adjusting module 119 is configured to adjust an attribute of the identification frame K according to an input adjustment operation so as to adjust the range of the target area, wherein the attribute of the identification frame includes the position, the size, or the combination of the position and the size of the identification frame.
In this embodiment, the obtaining module 113 is further configured to obtain the content of the third partial screen corresponding to the adjusted target area. The scaling module 114 is further configured to enlarge the content of the third local screen. The display module 111 is further configured to update the display screen of the second display sub-area R2 in real time according to the content of the enlarged third partial screen.
For example, as shown in fig. 9, the adjusting module 119 may enlarge the identification frame K shown in fig. 8 according to the input adjustment operation to expand the range of the target area. Similarly, the adjusting module 119 may also shrink the identification frame K shown in fig. 8 according to the input adjustment operation to narrow the range of the target area. It should be noted that adjusting the size of the identification frame K only means resizing the frame itself; it does not scale the picture content enclosed by the frame.
In addition, the adjusting module 119 may further move the identification frame K according to the input adjustment operation to reselect a local region of interest, for example, as shown in fig. 5, the object of interest in the target region is a woman, and after moving the identification frame K to the position shown in fig. 10, a man may be selected as the object of interest.
It is understood that the adjusting module 119 can also adjust the position and size of the identification frame K according to the input adjusting operation, so as to reselect the local region of interest and adjust the range of the local region.
In this embodiment, the zooming module 114 is specifically configured to zoom in the content of the third local screen according to the size of the second display sub-area R2.
In another embodiment, the display module 111 is further configured to update the display screens of the second display sub-region R2 and the secondary display screen in real time according to the content of the enlarged third partial screen.
Accordingly, in the another embodiment, the zooming module 114 is specifically configured to enlarge the content of the third local screen according to the size of the second display sub-region R2, and enlarge the content of the third local screen according to the size of the display region of the secondary display screen.
Wherein the content of the third partial screen displayed in the second display sub-area R2 of the viewfinder display is enlarged in accordance with the size of the second display sub-area R2, and the content of the third partial screen displayed in the sub-display is enlarged in accordance with the size of the display area of the sub-display.
Referring to fig. 20 again, in the present embodiment, the photographing apparatus 10 further includes a photographing module 120, and the photographing module 120 is configured to perform a photographing task according to an input photographing operation and generate a corresponding image. The display module 111 is further configured to display the image in full screen in the viewfinder display.
In another embodiment, the display module 111 is further configured to display the image in full screen in the viewfinder display and the secondary display.
Optionally, the display module 111 is further configured to, when exiting the image display mode, re-display a new viewfinder picture in the viewfinder display and the sub-display of the mobile terminal, so as to perform a next shooting task.
The photographing apparatus allows the photographer and/or the shot object to select a partial picture of interest from the viewfinder picture during framing, enlarges the content of the selected partial picture, and displays the viewfinder picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view the overall preview effect of the viewfinder picture and the detail information of the partial picture of interest. On the other hand, the target area is adjusted automatically as the content of the viewfinder picture changes in real time, so that the enlarged partial picture stays locked on the detail information of interest to the photographer and/or the shot object, and the posture, expression, position and the like of the shot object can be adjusted accordingly. This helps the photographer make a shooting decision quickly to improve shooting efficiency; on a mobile terminal with dual display screens it also helps the shot object check whether his or her own posture, expression and the like are in place and self-adjust in real time, giving the photographer and/or the shot object a better use experience.
The embodiment of the present application further provides a mobile terminal, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the program, the steps of the shooting method in the foregoing embodiment are implemented.
Fig. 21 is a schematic structural diagram of a mobile terminal 100 according to a first embodiment of the present application. As shown in fig. 21, the mobile terminal 100 includes at least a processor 20, a memory 30, a computer program 40 (e.g., a photographing program) stored in the memory 30 and operable on the processor 20, a camera 52, and a viewfinder display 53.
The mobile terminal 100 may be an electronic device with a shooting function, such as a camera, a smartphone or a tablet computer. Those skilled in the art will appreciate that fig. 21 is merely an example of the mobile terminal 100 for implementing the shooting method and does not constitute a limitation of the mobile terminal 100; it may include more or fewer components than those shown, combine some components, or have different components. For example, the mobile terminal 100 may further include an input/output device, a network access device, a wireless transmission device, and the like.
It is understood that when the mobile terminal 100 is started, the camera 52 of the mobile terminal 100 may be directed at an object to be photographed in a shooting scene, and the camera 52 can capture the content of the shooting scene in real time.
In the present embodiment, the viewfinder display 53 is used to display the content of the viewfinder interface and the enlarged first, second, and third partial pictures. It is understood that the viewfinder picture shows the scene content within the focus range of the camera 52 of the mobile terminal 100. When the camera 52 of the mobile terminal 100 is activated, the viewfinder display 53 can display the viewfinder picture in real time.
In another embodiment, as shown in fig. 22, the mobile terminal 100' may further include a sub-display 54, which may also be used to display the content of the viewfinder interface and the enlarged first, second, and third partial pictures.
The viewfinder display 53 is disposed on the front surface of the mobile terminal 100, and the sub-display 54 is disposed on the back surface. It should be noted that the front surface is the surface of the mobile terminal 100 facing the photographer during use; correspondingly, the back surface is the surface facing away from the photographer during use. The purpose of providing the sub-display 54 and displaying the viewfinder interface and the enlarged first, second, and third partial pictures on it is as follows: when the object included in the partial picture of interest is a person and the distance between that person and the camera 52 of the mobile terminal 100 is within a predetermined range (for example, 1 m), the subject can see from the picture on the sub-display 54 whether his or her posture, expression, and the like are in place and adjust them in real time, so that a picture satisfying the photographer and/or the subject can be taken.
It can be understood that the sizes of the viewfinder display 53 and the sub-display 54 can be designed according to actual needs and are not limited in this application. It should be noted that the schematic diagrams in the present application, for example, figs. 13A, 13B, 16A, 16B, 18A, 18B, 19A, and 19B, are only used to schematically illustrate the picture on the viewfinder display 53 or the sub-display 54 and do not imply that the viewfinder display 53 is larger than, equal to, or smaller than the sub-display 54.
In one embodiment, the viewfinder display 53 and/or the sub-display 54 of the mobile terminal 100 are touch displays, and the photographer or subject can perform a touch operation directly on the viewfinder display 53 and/or the sub-display 54 to select a partial picture of interest from the viewfinder picture, or to adjust the position or size of the area of the partial picture of interest.
In another embodiment, the viewfinder display 53 and/or the sub-display 54 are non-touch displays, and the mobile terminal 100 further includes a peripheral device (e.g., a mouse), through which the photographer or subject can input an operation on the viewfinder display 53 and/or the sub-display 54 to select a partial picture of interest from the viewfinder picture or to adjust the position or size of the area of the partial picture of interest.
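Whether the selection comes from a touch drag or from a peripheral such as a mouse, it must ultimately be converted into a target-area rectangle within the frame. A minimal sketch of that conversion, with all names hypothetical and a simple clamping policy assumed, might look like this:

```python
def drag_to_target_area(p0, p1, frame_w, frame_h, min_size=16):
    """Convert a drag gesture (touch or mouse) from point p0 to point p1
    into a target-area rectangle (x, y, w, h), clamped to the frame bounds
    and forced to a minimum selectable size."""
    x0, y0 = p0
    x1, y1 = p1
    x, y = min(x0, x1), min(y0, y1)
    w, h = abs(x1 - x0), abs(y1 - y0)
    # Keep the rectangle inside the frame.
    x = max(0, min(x, frame_w - min_size))
    y = max(0, min(y, frame_h - min_size))
    w = max(min_size, min(w, frame_w - x))
    h = max(min_size, min(h, frame_h - y))
    return x, y, w, h
```

The clamping order (position first, then size) guarantees the returned rectangle is always a valid crop of the frame even when the drag runs off-screen.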
The processor 20 executes the computer program 40 to implement the steps of the above-mentioned various embodiments of the shooting method, such as steps 101 to 106 shown in fig. 1, or steps 201 to 214 shown in fig. 7, or steps 301 to 306 shown in fig. 12, or steps 401 to 414 shown in fig. 17. Alternatively, the processor 20 implements the functions of the modules/units, such as the modules 11 to 17, in the embodiment of the photographing apparatus 10 when executing the computer program 40.
Illustratively, the computer program 40 may be partitioned into one or more modules/units that are stored in the memory 30 and executed by the processor 20 to accomplish the present application. The one or more modules/units may be a series of computer program 40 instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 40 in the mobile terminal 100. For example, the computer program 40 can be divided into the display module 111, the selection module 112, the obtaining module 113, the scaling module 114, the split-screen module 115, the feature analysis module 116, the detection module 117, the tracking module 118, the adjustment module 119, and the shooting module 120 in fig. 20, and the specific functions of each of the modules 111 to 120 are described in detail in the foregoing, which is omitted for brevity and to avoid repetition.
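Purely as an illustration of the kind of logic a tracking module such as module 118 might contain — the patent does not prescribe any particular tracking algorithm — a minimal template-matching tracker that searches a local window around the previous target area could be sketched as follows (all names hypothetical):

```python
import numpy as np

def track_target(frame, template, prev_box, search=20):
    """Re-locate `template` (the target object's pixels) in a new frame by
    minimising the sum of squared differences over a +/- `search`-pixel
    window around the previous target area; returns the new (x, y, w, h)."""
    px, py, w, h = prev_box
    fh, fw = frame.shape[:2]
    best, best_xy = None, (px, py)
    for y in range(max(0, py - search), min(fh - h, py + search) + 1):
        for x in range(max(0, px - search), min(fw - w, px + search) + 1):
            patch = frame[y:y + h, x:x + w].astype(np.int64)
            ssd = np.sum((patch - template.astype(np.int64)) ** 2)
            if best is None or ssd < best:
                best, best_xy = ssd, (x, y)
    return (*best_xy, w, h)
```

A production tracker would use a more robust method (e.g., a correlation filter or feature-based tracker), but the interface is the same: given the new frame and the previous target area, return the re-determined target area from which the second partial picture is cropped.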
The Processor 20 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor or any conventional processor. The processor 20 is the control center of the photographing apparatus 10/mobile terminal 100 and connects its various parts using various interfaces and lines.
The memory 30 may be used to store the computer program 40 and/or the modules/units, and the processor 20 implements the various functions of the photographing apparatus 10/mobile terminal 100 by running or executing the computer program 40 and/or the modules/units stored in the memory 30 and calling data stored in the memory 30. The memory 30 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (e.g., a sound playing function, an image playing function); the data storage area may store data created during use of the mobile terminal 100 (e.g., audio data, or data acquired by applying the above-described photographing method). In addition, the memory 30 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The present application also provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the photographing method described in the above embodiments.
If the modules/units integrated in the photographing apparatus 10/mobile terminal 100/computer device are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
Although the present application has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the present application.

Claims (20)

  1. A shooting method is applied to a mobile terminal, the mobile terminal comprises a view finding display screen, and the shooting method is characterized by comprising the following steps:
    displaying a framing picture in a framing display screen of the mobile terminal in an initial state;
    when a selection operation is detected on the view finding picture, determining a target area selected by the selection operation;
    acquiring the content of a first partial picture corresponding to the target area in the view-finding picture;
    enlarging the content of the first partial picture;
    and displaying the contents of the framing picture and the amplified first partial picture on the framing display screen in a split screen mode in real time.
  2. The photographing method according to claim 1, wherein the step of split-screen displaying the contents of the finder screen and the enlarged first partial screen on the finder display screen in real time includes:
    dividing a display area of the view finding display screen into a first display sub-area and a second display sub-area which are arranged in parallel;
    and displaying the view-finding picture on the first display sub-area in real time, and displaying the content of the enlarged first partial picture on the second display sub-area in real time.
  3. The photographing method according to claim 2, wherein the first display sub-area and the second display sub-area are arranged in parallel in a display area of the finder display screen in a lateral or vertical direction.
  4. The photographing method according to claim 2, wherein the step of displaying the finder screen in real time on the first display sub-area includes:
    reducing the view-finding picture according to the size of the first display subarea;
    and displaying the reduced view-finding picture on the first display subarea in real time.
  5. The photographing method of claim 2, wherein the step of enlarging the contents of the first partial screen comprises:
    and enlarging the content of the first partial picture according to the size of the second display sub-area.
  6. The photographing method according to claim 2, wherein the photographing method further comprises:
    extracting the characteristics of the local picture content corresponding to the target area, and determining a target object according to the extracted characteristics;
    when detecting that the content of the framing picture changes, tracking the target object by using a target tracking technology;
    re-determining a new target area according to the current position of the target object in the framing picture;
    acquiring the content of a second partial picture corresponding to the new target area;
    amplifying the content of the second partial picture;
    and updating the display picture of the second display sub-area in real time according to the content of the enlarged second partial picture.
  7. The photographing method according to claim 6, wherein the photographing method further comprises:
    and displaying an identification frame on the viewfinder to identify the target area.
  8. The photographing method according to claim 7, wherein the photographing method further comprises:
    adjusting the attribute of the identification frame according to the input adjustment operation to adjust the range of the target area, wherein the attribute of the identification frame comprises the position, the size or the combination of the position and the size of the identification frame;
    acquiring the content of a third partial picture corresponding to the adjusted target area;
    enlarging the content of the third partial picture;
    and updating the display picture of the second display sub-area in real time according to the content of the enlarged third partial picture.
  9. The photographing method of claim 6, wherein the mobile terminal further includes a sub display screen, the photographing method further comprising:
    displaying the framing picture in the secondary display screen in an initial state; and
    displaying the enlarged content of the first partial screen in real time on the sub-display after enlarging the content of the first partial screen.
  10. The photographing method according to claim 9, after acquiring the content of the second partial picture corresponding to the new target area and enlarging the content of the second partial picture, further comprising:
    and updating the display picture of the secondary display screen in real time according to the content of the enlarged second partial picture.
  11. The photographing method according to claim 8, wherein the mobile terminal further includes a sub display for displaying the finder screen in an initial state;
    after acquiring the content of the third partial picture corresponding to the adjusted target area and amplifying the content of the third partial picture, the method further comprises the following steps:
    and updating the display picture of the secondary display screen in real time according to the content of the enlarged third partial picture.
  12. The photographing method according to claim 1, 2 or 6, wherein the photographing method further comprises:
    executing a photographing task according to the input photographing operation and generating a corresponding image;
    displaying the image in full screen in the viewfinder display screen.
  13. A shooting device is applied to a mobile terminal, the mobile terminal comprises a view finding display screen, and the shooting device is characterized by comprising:
    the display module is used for displaying a framing picture in a framing display screen of the mobile terminal in an initial state;
    the selection module is used for determining a target area selected by the selection operation when the selection operation is detected on the view finding picture;
    an acquisition module, configured to acquire content of a first partial picture corresponding to the target area in the finder picture;
    a zooming module for zooming in the content of the first partial picture;
    the display module is further used for displaying the content of the framing picture and the amplified first partial picture on the framing display screen in a split screen mode in real time.
  14. The camera of claim 13, further comprising a split screen module for dividing a display area of the viewfinder display into a first display sub-area and a second display sub-area arranged in parallel;
    the display module is specifically configured to display the view-finding picture in the first display sub-area in real time, and display the content of the enlarged first partial picture in the second display sub-area in real time.
  15. The camera of claim 14, wherein the zoom module is further configured to zoom out the viewfinder frame according to a size of the first display sub-region, and to zoom in content of the first partial frame according to a size of the second display sub-region;
    the display module is specifically configured to display the zoomed-out view-finding picture in the first display sub-area in real time.
  16. The camera of claim 14, wherein said camera further comprises:
    the characteristic analysis module is used for extracting the characteristics of the local picture content corresponding to the target area and determining a target object according to the extracted characteristics;
    the detection module is used for detecting whether the content of the framing picture changes;
    the tracking module is used for tracking the target object by utilizing a target tracking technology when detecting that the content of the framing picture changes;
    the selecting module is further used for re-determining a new target area according to the current position of the target object in the framing picture;
    the acquisition module is further used for acquiring the content of a second partial picture corresponding to the new target area;
    the zooming module is also used for zooming in the content of the second partial picture;
    the display module is further configured to update the display image of the second display sub-area in real time according to the content of the enlarged second partial image.
  17. The camera of claim 16, wherein the display module is further configured to display an identification box on the viewfinder frame to identify the target area;
    the shooting device further comprises an adjusting module, wherein the adjusting module is used for adjusting the attribute of the identification frame according to the input adjusting operation so as to adjust the range of the target area, and the attribute of the identification frame comprises the position, the size or the combination of the position and the size of the identification frame;
    the acquisition module is further used for acquiring the content of a third local picture corresponding to the adjusted target area;
    the zooming module is also used for zooming the content of the third local picture;
    the display module is further configured to update the display image of the second display sub-area in real time according to the content of the enlarged third partial image.
  18. The camera of claim 17, wherein the mobile terminal further comprises a secondary display screen, and wherein the display module is further configured to:
    displaying the framing picture in the secondary display screen in an initial state;
    displaying the enlarged content of the first partial screen in real time on the sub display screen after enlarging the content of the first partial screen; or
    After the content of the second partial picture is amplified, updating the display picture of the secondary display screen in real time according to the amplified content of the second partial picture; or
    And after the content of the third partial picture is enlarged, updating the display picture of the secondary display screen in real time according to the enlarged content of the third partial picture.
  19. A mobile terminal, characterized in that it comprises a processor for implementing the steps of the photographing method according to any one of claims 1-12 when executing a computer program stored in a memory.
  20. A computer-readable storage medium, on which computer instructions are stored, which, when executed by a processor, carry out the steps of the photographing method according to any one of claims 1-12.
CN201780095805.0A 2017-11-10 2017-11-10 Photographing method and device, mobile terminal and computer readable storage medium Pending CN111201773A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/110563 WO2019090734A1 (en) 2017-11-10 2017-11-10 Photographing method and apparatus, mobile terminal, and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111201773A true CN111201773A (en) 2020-05-26

Family

ID=66437418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780095805.0A Pending CN111201773A (en) 2017-11-10 2017-11-10 Photographing method and device, mobile terminal and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN111201773A (en)
WO (1) WO2019090734A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866578B (en) * 2021-02-03 2023-04-07 四川新视创伟超高清科技有限公司 Global-to-local bidirectional visualization and target tracking system and method based on 8K video picture
CN113741022A (en) * 2021-07-22 2021-12-03 武汉高德智感科技有限公司 Method and device for displaying infrared image in picture and display equipment
CN114265538A (en) * 2021-12-21 2022-04-01 Oppo广东移动通信有限公司 Photographing control method and device, storage medium and electronic equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101605207A (en) * 2009-04-15 2009-12-16 明基电通有限公司 A kind of digital camera method of operation and the digital camera that uses the method
CN101656826A (en) * 2008-08-21 2010-02-24 鸿富锦精密工业(深圳)有限公司 Video recording system and video recording method thereof
US20130218464A1 (en) * 2012-02-17 2013-08-22 Chun-Ming Chen Method for generating split screen according to a touch gesture
CN105491220A (en) * 2014-10-01 2016-04-13 Lg电子株式会社 Mobile terminal and control method thereof
CN105824492A (en) * 2015-09-30 2016-08-03 维沃移动通信有限公司 Display control method and terminal

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN102033712A (en) * 2010-12-25 2011-04-27 鸿富锦精密工业(深圳)有限公司 Electronic reading device with split display function and display method thereof
US9055162B2 (en) * 2011-02-15 2015-06-09 Lg Electronics Inc. Method of transmitting and receiving data, display device and mobile terminal using the same
KR20130061993A (en) * 2011-12-02 2013-06-12 (주) 지.티 텔레콤 The operating method of touch screen
KR102031142B1 (en) * 2013-07-12 2019-10-11 삼성전자주식회사 Electronic device and method for controlling image display
KR101638963B1 (en) * 2014-08-14 2016-07-22 삼성전자주식회사 User terminal apparatus and control method thereof
CN105898134A (en) * 2015-11-15 2016-08-24 乐视移动智能信息技术(北京)有限公司 Image acquisition method and device


Also Published As

Publication number Publication date
WO2019090734A1 (en) 2019-05-16

Similar Documents

Publication Publication Date Title
EP3457683B1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
KR102480245B1 (en) Automated generation of panning shots
US8497920B2 (en) Method, apparatus, and computer program product for presenting burst images
WO2016123893A1 (en) Photographing method, device and terminal
US20160337593A1 (en) Image presentation method, terminal device and computer storage medium
WO2019056527A1 (en) Capturing method and device
CN112714255B (en) Shooting method and device, electronic equipment and readable storage medium
EP4044579A1 (en) Main body detection method and apparatus, and electronic device and computer readable storage medium
TW201215124A (en) Imaging apparatus, image processing method, and recording medium for recording program thereon
CN112584043B (en) Auxiliary focusing method and device, electronic equipment and storage medium
WO2018166069A1 (en) Photographing preview method, graphical user interface, and terminal
JP2016136683A (en) Imaging apparatus and control method of the same
CN113141450A (en) Shooting method, shooting device, electronic equipment and medium
CN106464799A (en) Automatic zooming method and device
CN112954193B (en) Shooting method, shooting device, electronic equipment and medium
CN104754223A (en) Method for generating thumbnail and shooting terminal
CN112367459A (en) Image processing method, electronic device, and non-volatile computer-readable storage medium
CN112188097B (en) Photographing method, photographing apparatus, terminal device, and computer-readable storage medium
CN111201773A (en) Photographing method and device, mobile terminal and computer readable storage medium
CN113014798A (en) Image display method and device and electronic equipment
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112532875B (en) Terminal device, image processing method and device thereof, and storage medium
CN115334237A (en) Portrait focusing method, device and medium based on USB camera
CN112887624B (en) Shooting method and device and electronic equipment
CN112653841B (en) Shooting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200526