CN108419016B - Shooting method and device and terminal - Google Patents

Shooting method and device and terminal Download PDF

Info

Publication number
CN108419016B
CN108419016B (application CN201810345190.9A)
Authority
CN
China
Prior art keywords
camera
picture
screen
definition
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810345190.9A
Other languages
Chinese (zh)
Other versions
CN108419016A (en
Inventor
李仁涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201810345190.9A priority Critical patent/CN108419016B/en
Publication of CN108419016A publication Critical patent/CN108419016A/en
Application granted granted Critical
Publication of CN108419016B publication Critical patent/CN108419016B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure relates to a shooting method, a shooting device, and a terminal. The shooting method includes: turning on a first camera and a second camera when entry into a preset shooting mode is detected; displaying a first framing picture of the first camera and a second framing picture of the second camera on the same screen; and, when a shooting operation is detected, shooting through at least one of the first camera and the second camera. With this technical scheme, the front camera and the rear camera can be turned on simultaneously and their framing pictures displayed on the same screen, so the user sees both framing pictures at once and can shoot through either camera without switching. This offers high flexibility and convenient operation, meets the user's diversified shooting requirements, and improves the user experience; in addition, scenes both in front of and behind the terminal can be captured through the front and rear cameras.

Description

Shooting method and device and terminal
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a shooting method, an apparatus, and a terminal.
Background
With the increasing intelligence of terminals, most terminals are equipped with two cameras, commonly referred to as a front camera and a rear camera. The front camera is arranged on the front of the terminal, shoots images in front of the terminal, and can be used in scenarios such as self-portraits and video calls. The rear camera is arranged on the back of the terminal and shoots images behind the terminal.
In the related art, a user may choose whether to use the front camera or the rear camera as needed, and may switch between them through a virtual button provided on the shooting interface. When either camera is selected, the corresponding framing picture is displayed on the screen; the user can only select the front camera or the rear camera, that is, only one camera can be used for shooting at a time.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide a shooting method, a shooting device, and a terminal, so as to solve the limitation in the related art that only one camera of the terminal can be used at a time.
According to a first aspect of the embodiments of the present disclosure, there is provided a photographing method including:
turning on a first camera and a second camera when entry into a preset shooting mode is detected;
displaying a first framing picture of the first camera and a second framing picture of the second camera on the same screen; and
when a shooting operation is detected, shooting through at least one of the first camera and the second camera.
In an embodiment, the displaying a first view of the first camera and a second view of the second camera on the same screen includes:
reading a preset screen occupation ratio;
and displaying the first framing picture and the second framing picture on the same screen based on the screen occupation ratio.
In an embodiment, the displaying a first view of the first camera and a second view of the second camera on the same screen includes:
displaying the first framing picture and the second framing picture based on a preset screen occupation ratio;
detecting a setting operation on either of the first framing picture and the second framing picture; and
adjusting the screen occupation ratio and position of the corresponding framing picture on the screen according to the direction and displacement of the setting operation.
In an embodiment, the displaying a first view of the first camera and a second view of the second camera on the same screen includes:
determining a first definition of the first framing picture and a second definition of the second framing picture; and
determining the screen occupation ratio of the first framing picture and the screen occupation ratio of the second framing picture based on the first definition and the second definition.
In an embodiment, the method further comprises:
when the shooting operation is an operation for shooting a video, determining the framing picture to which the captured sound belongs.
In one embodiment, the determining a viewfinder frame to which the captured sound belongs includes:
when the captured sound comes from one user, performing face recognition on the user in a framing picture to determine whether the user is in a speaking state;
when the user is in a speaking state, determining that the captured sound belongs to the framing picture in which the user appears; and
encoding the sound into the video picture corresponding to the framing picture in which the user appears.
In one embodiment, the determining a viewfinder frame to which the captured sound belongs includes:
when the captured sound comes from at least two users, determining the distance between each sound and the camera; and
encoding the sound farther from the camera into the video picture shot by the second camera, and the sound closer to the camera into the video picture shot by the first camera.
In an embodiment, when the photographing operation is an operation for photographing an image, the method further includes:
and carrying out preset synthesis processing on the image shot by the first camera and the image shot by the second camera.
According to a second aspect of the embodiments of the present disclosure, there is provided a photographing apparatus including:
the starting module is configured to start the first camera and the second camera when the preset shooting mode is detected to be entered;
the display module is configured to display a first framing picture of the first camera and a second framing picture of the second camera on the same screen;
a photographing module configured to perform photographing through at least one of the first and second cameras when a photographing operation is detected.
In one embodiment, the display module includes:
the reading sub-module is configured to read a preset screen occupation ratio;
a first display sub-module configured to display the first framing picture and the second framing picture on the same screen based on the screen occupation ratio.
In one embodiment, the display module includes:
a second display sub-module configured to display the first framing picture and the second framing picture based on a preset screen occupation ratio;
a detection sub-module configured to detect a setting operation on either of the first framing picture and the second framing picture; and
an adjusting sub-module configured to adjust the screen occupation ratio and position of the corresponding framing picture on the screen according to the direction and displacement of the setting operation.
In one embodiment, the display module includes:
a first determination sub-module configured to determine a first definition of the first framing picture and a second definition of the second framing picture;
a second determination sub-module configured to determine the screen occupation ratio of the first framing picture and the screen occupation ratio of the second framing picture based on the first definition and the second definition.
In one embodiment, the apparatus further comprises:
a determination module configured to determine, when the shooting operation is an operation for shooting a video, the framing picture to which the captured sound belongs.
In one embodiment, the determining module comprises:
a recognition sub-module configured to, when the captured sound comes from a user, perform face recognition on the user in a framing picture and determine whether the user is in a speaking state;
a third determination sub-module configured to determine, when the user is in a speaking state, that the captured sound belongs to the framing picture in which the user appears; and
a first encoding sub-module configured to encode the sound into the video picture corresponding to the framing picture in which the user appears.
In one embodiment, the determining module comprises:
a judgment sub-module configured to determine, when the captured sound comes from at least two users, the distance between each sound and the camera;
a second encoding sub-module configured to encode the sound farther from the camera into the video picture shot by the second camera and the sound closer to the camera into the video picture shot by the first camera.
In one embodiment, the apparatus further comprises:
a processing module configured to perform preset synthesis processing on the image shot by the first camera and the image shot by the second camera.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
turn on a first camera and a second camera when entry into a preset shooting mode is detected;
display a first framing picture of the first camera and a second framing picture of the second camera on the same screen; and
when a shooting operation is detected, shoot through at least one of the first camera and the second camera.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
turning on a first camera and a second camera when entry into a preset shooting mode is detected;
displaying a first framing picture of the first camera and a second framing picture of the second camera on the same screen; and
when a shooting operation is detected, shooting through at least one of the first camera and the second camera.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the terminal can turn on the front camera and the rear camera simultaneously and display their framing pictures on the same screen, so the user sees both framing pictures at once and can shoot through either camera without switching. This offers high flexibility and convenient operation, meets the user's diversified shooting requirements, and improves the user experience; in addition, scenes both in front of and behind the terminal can be captured through the front and rear cameras.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1A is a flow chart illustrating a photographing method according to an exemplary embodiment.
Fig. 1B is a scene diagram illustrating a photographing method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating another photographing method according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating another photographing method according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating another photographing method according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating another photographing method according to an exemplary embodiment.
Fig. 6 is a flow chart illustrating another photographing method according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating a photographing apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating another photographing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating another photographing apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating another photographing apparatus according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating another photographing apparatus according to an exemplary embodiment.
Fig. 12 is a block diagram illustrating another photographing apparatus according to an exemplary embodiment.
Fig. 13 is a block diagram illustrating another photographing apparatus according to an exemplary embodiment.
Fig. 14 is a block diagram illustrating another photographing apparatus according to an exemplary embodiment.
Fig. 15 is a block diagram illustrating an apparatus suitable for photographing according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Fig. 1A is a flowchart illustrating a photographing method according to an exemplary embodiment, and fig. 1B is a scene diagram illustrating the photographing method according to an exemplary embodiment. The shooting method can be applied to a terminal (user equipment), and the terminal in the present disclosure may be any intelligent terminal with internet access, for example a mobile phone, a tablet computer, or a PDA (Personal Digital Assistant).
The terminal can access a router through a wireless local area network and access a server on the public network through the router. As shown in fig. 1A, the shooting method includes the following steps 101-103:
in step 101, when entering a preset display mode is detected, a first camera and a second camera are turned on.
In an embodiment, the first camera and the second camera are two cameras respectively disposed on the front surface and the back surface of the terminal, for example, the first camera may be a front camera, and the second camera may be a rear camera; or the first camera is a rear camera and the second camera is a front camera.
In an embodiment, the preset display mode may be a split-screen display mode, that is, a display mode capable of simultaneously turning on the front camera and the rear camera and simultaneously displaying a first view frame of the front camera and a second view frame of the rear camera on the screen.
In an embodiment, when the camera application is opened, the terminal may offer a choice of display mode or shooting mode, where the shooting modes may include turning on the front camera, turning on the rear camera, or turning on both. In the shooting mode with the front camera on, the first framing picture of the front camera is displayed on the screen; in the shooting mode with the rear camera on, the second framing picture of the rear camera is displayed; and in the shooting mode with both cameras on, the first framing picture and the second framing picture are both displayed. Accordingly, the display modes may include displaying the first framing picture, displaying the second framing picture, and displaying both framing pictures.
When the user selects the display mode in which the first framing picture and the second framing picture are displayed together, or selects the shooting mode in which both cameras are turned on, entry into the preset shooting mode is determined, and the front camera and the rear camera are turned on accordingly.
In step 102, the first framing picture of the first camera and the second framing picture of the second camera are displayed on the same screen.
In one embodiment, the first viewfinder frame and the second viewfinder frame each occupy half of the screen area, for example, the first viewfinder frame is displayed on the upper half of the screen, and the second viewfinder frame is displayed on the lower half of the screen.
In practical application, the proportion of the first viewing picture and the second viewing picture on the screen can be adjusted or set according to the needs and habits of users.
In step 103, when a photographing operation is detected, photographing is performed by at least one of the first camera and the second camera.
In one embodiment, virtual buttons are arranged on the first framing picture and the second framing picture displayed on the screen. The virtual buttons can provide multiple functions, such as shooting, recording, focusing, previewing, and turning the flash on and off, and the user can trigger the corresponding function by operating a virtual button.
In an embodiment, when detecting that a user operates a first virtual button on a first viewing screen, the terminal executes a corresponding function through the first camera, and when detecting that the user operates a second virtual button on a second viewing screen, the terminal executes a corresponding function through the second camera.
In an embodiment, when the terminal shoots through the front camera and the rear camera, a separate I2C bus may be provided for each camera. The two I2C buses are connected to two ports of the terminal's CPU (Central Processing Unit), and the CPU controls the front camera and the rear camera through the two I2C buses respectively. This avoids the conflict that can arise when the two cameras are opened at the same time, or the inability to open them simultaneously.
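As a rough illustration of this dual-bus arrangement, the following Python sketch models two independent I2C buses. All class and method names here are invented for illustration; the real control path lives in the camera driver and firmware, not in application code.

```python
class I2CBus:
    """Toy stand-in for one I2C bus (illustrative only)."""

    def __init__(self, port):
        self.port = port
        self.open_devices = []

    def open(self, device):
        # Each bus serves exactly one camera, so there is no contention here.
        self.open_devices.append(device)
        return True


class DualCameraCpu:
    """CPU with two ports, each wired to its own I2C bus, so turning on one
    camera never conflicts with turning on the other."""

    def __init__(self):
        self.front_bus = I2CBus(port=0)
        self.rear_bus = I2CBus(port=1)

    def open_both(self):
        # Opening both cameras succeeds because each open() goes over its own bus.
        return self.front_bus.open("front_camera") and self.rear_bus.open("rear_camera")
```

With a single shared bus, the two open requests would have to contend for the same channel; two buses make the two opens independent, which is the conflict the embodiment describes avoiding.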
In an exemplary scenario, as shown in fig. 1B, take the first camera as the front camera and the second camera as the rear camera. The terminal is a smartphone with a front camera and a rear camera. When it detects that the camera application is opened, it displays prompt information for the shooting modes: turn on the front camera, turn on the rear camera, or turn on both. When the user taps the option to turn on both cameras, the first framing picture corresponding to the first camera and the second framing picture corresponding to the second camera are displayed on the same screen, each occupying 50% of the screen, with the first framing picture above and the second below. Virtual buttons are displayed on both framing pictures, and when the user taps the first virtual button on the first framing picture, shooting is performed through the front camera.
Details of how shooting is performed are given in the following embodiments.
According to the method provided by this embodiment of the disclosure, the front camera and the rear camera can be turned on simultaneously and their framing pictures displayed on the same screen, so the user sees both framing pictures at once and can shoot through either camera without switching. This offers high flexibility and convenient operation, meets the user's diversified shooting requirements, and improves the user experience; in addition, scenes both in front of and behind the terminal can be captured through the front and rear cameras.
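The flow of steps 101-103 can be sketched in Python as follows. This is a toy model; the option strings and dictionary keys are invented for illustration and are not part of the patent.

```python
def enter_preset_mode(selected_option):
    """Steps 101-102: when the option to turn on both cameras is selected,
    open both cameras and lay the two framing pictures out on one screen
    (upper/lower halves, as in the fig. 1B scenario)."""
    if selected_option != "front_and_rear":
        return None  # some other shooting mode was chosen
    return {
        "cameras_on": ["front", "rear"],
        "layout": {"front": "upper_half", "rear": "lower_half"},
    }


def shoot(state, tapped_picture):
    """Step 103: shooting goes through the camera whose virtual button was
    tapped; the other camera keeps showing its framing picture."""
    if tapped_picture not in state["cameras_on"]:
        raise ValueError("unknown framing picture")
    return f"captured_by_{tapped_picture}"
```

The point of the sketch is that no camera switch happens at shoot time: both cameras are already on, and the tap merely selects which one captures.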
The technical solutions provided by the embodiments of the present disclosure are described below with specific embodiments.
Fig. 2 is a flowchart illustrating another photographing method according to an exemplary embodiment. This embodiment uses the method provided above to illustrate, with reference to fig. 1B, how to display the first framing picture of the first camera and the second framing picture of the second camera on the same screen. As shown in fig. 2, the method includes the following steps:
in step 201, a preset screen occupation ratio is read.
In one embodiment, the screen occupation ratio of the first framing picture and the second framing picture can be preset and stored in the terminal.
For example, the screen occupation ratio may be 1:1, that is, the first viewfinder occupies half the area of the screen, and the second viewfinder occupies the other half of the area of the screen.
In one embodiment, the screen occupation ratio can be set arbitrarily, for example, 4:5, 3:7, etc.
In an embodiment, screen occupation positions may also be set in the terminal, and they too may be set arbitrarily. For example, at a 4:5 screen ratio, the framing picture with share 4 may be positioned on the upper part of the screen and the framing picture with share 5 on the lower part. It is also possible to default the framing picture with the smaller share to the top and the one with the larger share to the bottom. In addition, the two framing pictures may be positioned at the upper right and lower left, or at the upper left and lower right, according to the screen occupation ratio; for example, at a 4:5 ratio, the framing picture with share 4 may be at the upper right of the screen and the one with share 5 at the lower left.
In step 202, the first framing picture and the second framing picture are displayed on the same screen on the screen based on the screen occupation ratio.
In one embodiment, after the terminal reads the screen occupation ratio, the proportion and the position of the first framing picture and the second framing picture on the screen are allocated according to the screen occupation ratio.
In an exemplary scenario, as shown in fig. 1B, the screen occupation ratio read by the terminal is 1:1, so the first framing picture is displayed on the upper half of the screen, the second framing picture on the lower half, and each occupies half the area of the screen.
In the above embodiment, through steps 201 and 202, the first framing picture and the second framing picture can be displayed based on the preset screen occupation ratio, meeting the user's personalized requirements and improving the user experience.
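As a simple illustration of steps 201-202, the following Python sketch divides the screen height between the two framing pictures according to a preset occupation ratio such as 1:1 or 4:5. The function name and the (top, bottom) band convention are illustrative assumptions, not part of the patent.

```python
def split_screen(screen_height, ratio):
    """Divide the screen height between the two framing pictures according
    to a preset occupation ratio; the first framing picture is placed above
    the second, as in the fig. 1B scenario.

    Returns two (top, bottom) pixel bands, one per framing picture."""
    first_share, second_share = ratio
    boundary = round(screen_height * first_share / (first_share + second_share))
    return (0, boundary), (boundary, screen_height)
```

For a 1080-pixel-tall screen at 1:1, each framing picture gets a 540-pixel band; at 4:5 the first framing picture gets 4/9 of the height.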
Fig. 3 is a flowchart illustrating another photographing method according to an exemplary embodiment. This embodiment uses the method provided above to illustrate how to display the first framing picture of the first camera and the second framing picture of the second camera on the same screen. As shown in fig. 3, the method includes the following steps 301 to 303:
in step 301, a first framing picture and a second framing picture are displayed based on a preset screen occupation ratio.
In one embodiment, the preset screen occupation ratio may be, for example, 1: 1.
In step 302, a setting operation on either of the first framing picture and the second framing picture is detected.
In step 303, the screen occupation ratio and position of the corresponding framing picture on the screen are adjusted according to the direction and displacement of the setting operation.
In one embodiment, the setting operation may be a sliding operation: when a sliding operation is detected on the first framing picture, the sensor acquires the direction and displacement of the slide, the end point of the slide is taken as the center of the target position of the first framing picture, and the first framing picture is moved accordingly.
In an embodiment, the setting operation may be a zoom operation: if a two-finger zoom gesture is detected on the first framing picture, the direction and displacement of the gesture are acquired, and the first framing picture is scaled based on a preset correspondence between zoom scale and displacement.
Moreover, when either framing picture is adjusted in this way, the other framing picture can be adjusted correspondingly.
In this embodiment, through steps 301 to 303, the position and size of a framing picture can be adjusted through a setting operation on it, meeting the user's personalized requirements and improving the user experience.
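One plausible way to map a sliding setting operation onto the complementary adjustment described above is sketched below in Python. The 10%-90% clamping bounds are an invented assumption (to keep both framing pictures visible), not something the patent specifies.

```python
def drag_divider(first_ratio, drag_dy, screen_height):
    """Treat a vertical slide (a 'setting operation') on the boundary as a
    request to grow the first framing picture by drag_dy pixels; the second
    framing picture shrinks by the same amount, so the two stay complementary.

    first_ratio is the first framing picture's current share of the screen."""
    delta = drag_dy / screen_height          # displacement as a fraction of the screen
    new_first = min(0.9, max(0.1, first_ratio + delta))  # clamp so neither picture vanishes
    return new_first, 1.0 - new_first
```

Dragging down by 10% of the screen height turns a 50/50 split into 60/40; a drag past the clamp simply pins the split at 90/10.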
FIG. 4 is a flow diagram illustrating another photographing method according to an exemplary embodiment; in this embodiment, an example of how to display the first framing picture of the first camera and the second framing picture of the second camera on the same screen by using the above method provided in the embodiment of the present disclosure is described, as shown in fig. 4, including the following steps 401 and 402:
in step 401, a first sharpness of the first framed view and a second sharpness of the second framed view are determined.
In an embodiment, the pixels of the two cameras of the terminal are usually different, so that if the first viewfinder picture and the second viewfinder picture are displayed in equal proportion, the first definition of the first viewfinder picture and the second definition of the second viewfinder picture are different, which affects the visual experience of the user. Thus, in one embodiment, a first definition and a second definition may be obtained.
In step 402, a screen occupation ratio of the first framing picture and a screen occupation ratio of the second framing picture are determined based on the first definition and the second definition.
In an embodiment, after determining the first definition and the second definition, the terminal may adjust the screen occupation ratios of the two framing pictures based on them: the higher the definition, the smaller the corresponding framing picture's share of the screen may be, and the lower the definition, the larger the share.
In this embodiment, through steps 401 and 402, the screen occupation ratios of the first framing picture and the second framing picture can be adjusted based on their definitions, so that their display effects on the screen are substantially the same, avoiding the eye discomfort an excessive difference in display effect would cause and improving the experience.
Fig. 5 is a flowchart illustrating another photographing method according to an exemplary embodiment. This embodiment uses the method provided above to illustrate how to determine the framing picture to which a captured sound belongs. As shown in fig. 5, the method includes the following steps 501 to 503:
In the above embodiments, the shooting operation may be an operation of taking a photograph or of shooting a video; when it is a video-shooting operation, the framing picture to which the captured sound belongs needs to be determined.
In step 501, when the captured sound is from a user, face recognition is performed on the user in the finder screen to determine whether the user is in a speaking state.
In one embodiment, the terminal may determine a framing picture from which the captured sound comes by means of face recognition.
In a case where the captured sound is from one user, the terminal may perform face recognition on the user in the finder screen, and determine whether the user is in a speaking state based on an image of the mouth movement of the user.
In step 502, when the user is speaking, it is determined that the captured sound belongs to the finder screen where the user is located.
In one embodiment, when the terminal determines that the user in the finder screen is in a speaking state, it may determine that the captured sound is from the finder screen. For example, the first through-screen and the second through-screen both include the user, but the terminal determines that only the user in the first through-screen is in the speaking state by face recognition, and thus can determine that the captured sound is from the first through-screen.
In step 503, the sound is decoded into the video picture corresponding to the viewfinder picture in which the user is present.
In one embodiment, the terminal captures a video and recognizes the captured sound, and once the framing picture to which the captured sound belongs has been determined, decodes the sound into the video picture corresponding to that framing picture.
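The speaking-state check described above can be sketched as follows. This is a hedged illustration assuming a face-landmark detector has already produced per-frame mouth-openness values in [0, 1]; the function names, the threshold, and the spread heuristic are all assumptions, not the patent's method:

```python
def is_speaking(mouth_openness, threshold=0.15):
    """Decide whether a face is in a speaking state from a short sequence of
    per-frame mouth-openness measurements. Speech shows up as variation in
    mouth openness over time, so we test the spread of the sequence."""
    if len(mouth_openness) < 2:
        return False
    return max(mouth_openness) - min(mouth_openness) > threshold

def attribute_sound(openness_by_picture):
    """Return the framing picture whose user appears to be speaking,
    or None if no picture contains a speaking user."""
    for picture, openness in openness_by_picture.items():
        if is_speaking(openness):
            return picture
    return None

# The user in the first framing picture moves their mouth; the second does not.
result = attribute_sound({
    "first": [0.05, 0.40, 0.10, 0.35],
    "second": [0.08, 0.09, 0.08, 0.10],
})
```

Here the sound would be attributed to the first framing picture, since only its user's mouth openness varies enough to count as speech.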
In an embodiment, it may also be determined which camera the sound comes from based on a time difference between when the sound is captured by the first camera and the second camera, or based on a combination of the time difference and a dynamic picture of the mouth of the user.
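The time-difference variant mentioned above can be sketched as follows. This assumes hypothetical arrival timestamps from microphones near each camera; the margin value and the fallback behavior are illustrative assumptions:

```python
def camera_from_time_difference(t_first_mic, t_second_mic, margin=0.0005):
    """Pick the camera on whose side the sound arrived first, using the
    arrival-time difference (in seconds) between microphones near the first
    and second cameras. Differences within `margin` are treated as
    ambiguous, where a combination with mouth-movement cues could decide."""
    delta = t_first_mic - t_second_mic
    if abs(delta) <= margin:
        return None  # ambiguous; fall back to e.g. the face-recognition cue
    return "first camera" if delta < 0 else "second camera"

# Sound reaches the first camera's microphone 2.3 ms earlier.
source = camera_from_time_difference(0.0100, 0.0123)
```

In this hypothetical case the sound is attributed to the first camera's framing picture.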
In this embodiment, through steps 501 to 503, when the captured sound comes from a single user, the framing picture from which the sound comes can be determined by face recognition, so that the sound is accurately decoded into the corresponding video picture. This avoids a mismatch between the recorded video and its sound and ensures the accuracy of the video recording.
FIG. 6 is a flow diagram illustrating another photographing method according to an exemplary embodiment. This embodiment uses the method provided by the embodiments of the present disclosure to exemplarily explain how to determine the framing picture to which a captured sound belongs. As shown in fig. 6, the method includes the following steps 601 and 602:
In step 601, when the captured sound comes from at least two users, the distance between each sound and the camera is determined.
In one embodiment, the terminal may determine through sound recognition how many users the captured sound comes from, and then, when it comes from at least two users, estimate the distance between each sound and the camera based on, for example, the loudness of the sound.
In step 602, the sound farther from the camera is decoded into the video picture taken by the second camera, and the sound closer to the camera is decoded into the video picture taken by the first camera.
As explained in the embodiment shown in fig. 1A, the first camera may be a front camera and the second camera a rear camera. In general, the object shot by the first camera is closer to the terminal and the object shot by the second camera is farther from it, so the farther sound is decoded into the video picture shot by the second camera and the closer sound into the video picture shot by the first camera.
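The distance-based routing in step 602 can be sketched for the two-user case as follows. The distance values are hypothetical estimates (e.g. derived from loudness, as in step 601), and the function and label names are illustrative:

```python
def attribute_two_sounds(dist_a, dist_b):
    """Route two sound sources by their estimated distances from the
    terminal: the closer sound goes to the first (front) camera's video
    picture, the farther sound to the second (rear) camera's."""
    if dist_a <= dist_b:
        return {"sound_a": "first camera", "sound_b": "second camera"}
    return {"sound_a": "second camera", "sound_b": "first camera"}

# Sound A is estimated at 0.5 m from the terminal, sound B at 3.0 m.
routing = attribute_two_sounds(0.5, 3.0)
```

With these hypothetical distances, sound A is decoded into the front camera's video picture and sound B into the rear camera's.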
In an embodiment, building on the above embodiments, after the first camera and the second camera finish shooting, the first image shot by the first camera and the second image shot by the second camera are combined to obtain a complete panoramic image. The images shot by the front camera and the rear camera can also be spliced using other collage or image-processing methods, allowing the user to create a variety of composite images.
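A minimal stand-in for this splicing step is shown below. It represents each image as a list of equal-length pixel rows and simply joins the two images side by side; a real synthesis would also blend seams, match exposure, and so on:

```python
def splice_side_by_side(first_img, second_img):
    """Splice two images (equal-height lists of pixel rows) into one wide
    image, with the first camera's image on the left and the second
    camera's on the right."""
    if len(first_img) != len(second_img):
        raise ValueError("images must have the same height")
    # Concatenate the corresponding rows of the two images.
    return [left + right for left, right in zip(first_img, second_img)]

front = [[1, 1], [1, 1]]   # toy 2x2 image from the first (front) camera
rear = [[2, 2], [2, 2]]    # toy 2x2 image from the second (rear) camera
panorama = splice_side_by_side(front, rear)  # a 2x4 combined image
```

The same row-wise joining idea generalizes to other collage layouts by choosing how the rows (or columns) of the two images are interleaved.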
Further, after shooting is completed, whether of an image or of a video, a preview page can be entered through a preview function provided on the finder screen; that is, the captured content corresponding to the first framing picture and the captured content corresponding to the second framing picture are previewed together on the same screen.
In this embodiment, through steps 601 and 602, when the captured sound comes from at least two users, the framing picture from which each sound comes can be determined according to the distance between the sound and the camera, so that the sound is accurately decoded into the corresponding video picture. This avoids a mismatch between the recorded video and its sound and ensures the accuracy of the video recording.
Fig. 7 is a block diagram illustrating a photographing apparatus according to an exemplary embodiment, as shown in fig. 7, the photographing apparatus including: a start module 710, a display module 720, and a photographing module 730.
The starting module 710 is configured to start the first camera and the second camera when entering a preset shooting mode is detected;
a display module 720 configured to display a first view of the first camera and a second view of the second camera on the same screen;
and a photographing module 730 configured to, when a photographing operation is detected, perform photographing through at least one of the first camera and the second camera started by the starting module 710.
Fig. 8 is a block diagram of another photographing apparatus according to an exemplary embodiment, and as shown in fig. 8, on the basis of the above-mentioned embodiment shown in fig. 7, in an embodiment, the display module 720 may include: a read sub-module 721 and a first display sub-module 722.
A reading sub-module 721 configured to read a preset screen occupation ratio;
a first display submodule 722 configured to display the first finder screen and the second finder screen on the same screen based on the screen occupying ratio read by the reading submodule 721.
Fig. 9 is a block diagram of another photographing apparatus according to an exemplary embodiment, and as shown in fig. 9, on the basis of the above-mentioned embodiment shown in fig. 7, in an embodiment, the display module 720 may include: a second display submodule 723, a detection submodule 724, and an adjustment submodule 725.
A second display sub-module 723 configured to display the first view-finding screen and the second view-finding screen based on a preset screen-occupying ratio;
a detection submodule 724 configured to detect a setting operation on either of the first framing picture and the second framing picture displayed by the second display submodule 723;
an adjusting submodule 725 configured to adjust the screen proportion and the on-screen position of the corresponding framing picture according to the direction and displacement of the setting operation detected by the detection submodule 724.
Fig. 10 is a block diagram of another photographing apparatus according to an exemplary embodiment, and as shown in fig. 10, on the basis of the embodiment shown in fig. 7, the display module 720 may include: a first determination submodule 726 and a second determination submodule 727.
A first determining submodule 726 configured to determine a first sharpness of the first framed screen and a second sharpness of the second framed screen;
a second determination sub-module 727 configured to determine the screen proportion of the first framed screen and the screen proportion of the second framed screen based on the first definition and the second definition determined by the first determination sub-module 726.
Fig. 11 is a block diagram of another photographing apparatus according to an exemplary embodiment, and as shown in fig. 11, the apparatus may further include, on the basis of the above-described embodiment shown in fig. 7: a determination module 740.
A determination module 740 configured to determine a through-view to which the captured sound belongs when the capturing operation is an operation for capturing a video.
Fig. 12 is a block diagram of another photographing apparatus according to an exemplary embodiment, and as shown in fig. 12, on the basis of the above embodiment shown in fig. 11, the determining module 740 may include: an identification submodule 741, a third determination submodule 742, and a first decoding submodule 743.
The recognition sub-module 741, configured to perform face recognition on the user in the view finding picture when the captured sound comes from a user, and determine whether the user is in a speaking state;
a third determining submodule 742, configured to determine that the captured sound belongs to a view finding picture in which the user is located when the identifying submodule 741 determines that the user is in a speaking state;
a first decoding submodule 743 configured to decode the sound determined by the third determining submodule 742 into a video picture corresponding to a framing picture in which the user is present.
Fig. 13 is a block diagram of another photographing apparatus according to an exemplary embodiment, and as shown in fig. 13, on the basis of the above embodiment shown in fig. 11, the determining module 740 may include: a decision sub-module 743 and a second decoding sub-module 744.
A judgment submodule 743 configured to judge a distance between the sound and the camera when the photographed sound comes from at least two users;
the second decoding submodule 744 is configured to decode the sound determined by the determining submodule 743 to be farther from the camera into the video picture taken by the second camera, and decode the sound determined to be closer to the camera into the video picture taken by the first camera.
Fig. 14 is a block diagram of another photographing apparatus according to an exemplary embodiment, and as shown in fig. 14, the apparatus may further include, on the basis of the above-described embodiment shown in fig. 7: a processing module 750.
And a processing module 750 configured to perform preset combining processing on the image captured by the first camera and the image captured by the second camera.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 15 is a block diagram illustrating an apparatus 1500 for photographing according to an exemplary embodiment. For example, the apparatus 1500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, exercise equipment, a personal digital assistant, or another user device.
Referring to fig. 15, apparatus 1500 may include one or more of the following components: processing component 1502, memory 1504, power component 1506, multimedia component 1508, audio component 1510, input/output (I/O) interface 1512, sensor component 1514, and communications component 1516.
The processing component 1502 generally controls the overall operation of the apparatus 1500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1502 may include one or more processors to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1502 may include one or more modules that facilitate interaction between the processing component 1502 and other components. For example, the processing component 1502 may include a multimedia module to facilitate interaction between the multimedia component 1508 and the processing component 1502.
The memory 1504 is configured to store various types of data to support operations at the apparatus 1500. Examples of such data include instructions for any application or method operating on the device 1500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 1506 provides power to the various components of the apparatus 1500. The power component 1506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 1500.
The multimedia component 1508 includes a screen that provides an output interface between the device 1500 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, multimedia component 1508 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 1500 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1510 is configured to output and/or input audio signals. For example, the audio component 1510 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 1500 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1504 or transmitted via the communication component 1516. In some embodiments, audio component 1510 also includes a speaker for outputting audio signals.
The I/O interface 1512 provides an interface between the processing component 1502 and peripheral interface modules, which can be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1514 includes one or more sensors for providing status assessment of various aspects of the apparatus 1500. For example, the sensor assembly 1514 can detect an open/closed state of the device 1500, the relative positioning of components, such as a display and keypad of the device 1500, the sensor assembly 1514 can also detect a change in position of the device 1500 or a component of the device 1500, the presence or absence of user contact with the device 1500, orientation or acceleration/deceleration of the device 1500, and a change in temperature of the device 1500. The sensor assembly 1514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1516 is configured to facilitate wired or wireless communication between the apparatus 1500 and other devices. The apparatus 1500 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1516 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1504 comprising instructions, executable by the processing component 1502 of the apparatus 1500 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Wherein the processing component 1502 is configured to:
when entering a preset shooting mode is detected, start a first camera and a second camera;
display a first framing picture of the first camera and a second framing picture of the second camera on the same screen;
when a shooting operation is detected, perform shooting through at least one of the first camera and the second camera.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A photographing method, characterized in that the method comprises:
when it is detected that a preset shooting mode is entered, starting a first camera and a second camera, wherein the first camera and the second camera have different pixel counts;
displaying a first framing picture of the first camera and a second framing picture of the second camera on the same screen, which includes: determining, by the terminal, a first definition of the first framing picture and a second definition of the second framing picture, and adjusting the screen proportion of the first framing picture and the screen proportion of the second framing picture based on the first definition and the second definition, wherein the screen proportion of each framing picture is negatively correlated with the level of the corresponding definition;
when a shooting operation is detected, performing shooting through at least one of the first camera and the second camera.
2. The method of claim 1, further comprising:
when the shooting operation is an operation for shooting a video, a finder screen to which the shot sound belongs is determined.
3. The method of claim 2, wherein the determining the viewfinder frame to which the captured sound belongs comprises:
when the shot sound comes from a user, carrying out face recognition on the user in a view finding picture, and judging whether the user is in a speaking state;
when the user is in a speaking state, determining that the shot sound belongs to a framing picture where the user is located;
and decoding the sound into a video picture corresponding to the framing picture where the user is located.
4. The method of claim 2, wherein the determining the viewfinder frame to which the captured sound belongs comprises:
when the shot sound comes from at least two users, judging the distance between the sound and the camera;
and decoding the sound far away from the camera into the video picture shot by the second camera, and decoding the sound close to the camera into the video picture shot by the first camera.
5. The method according to claim 1, wherein when the photographing operation is an operation for photographing an image, the method further comprises:
and carrying out preset synthesis processing on the image shot by the first camera and the image shot by the second camera.
6. A photographing apparatus, characterized in that the apparatus comprises:
the starting module is configured to start a first camera and a second camera when entering a preset shooting mode is detected, wherein the pixels of the first camera and the second camera are different;
the display module is configured to display a first framing picture of the first camera and a second framing picture of the second camera on the same screen;
the display module comprises a first determining submodule and a second determining submodule; the first determining submodule is configured to determine, by a terminal, a first definition of the first framing picture and a second definition of the second framing picture; the second determining submodule is configured to adjust, by the terminal, the screen proportion of the first framing picture and the screen proportion of the second framing picture based on the first definition and the second definition, wherein the screen proportion of each framing picture is negatively correlated with the level of the corresponding definition;
a photographing module configured to perform photographing through at least one of the first and second cameras when a photographing operation is detected.
7. The apparatus of claim 6, further comprising:
a determination module configured to determine a finder screen to which the captured sound belongs when the capturing operation is an operation for capturing a video.
8. The apparatus of claim 7, wherein the determining module comprises:
the recognition sub-module is configured to perform face recognition on a user in a view finding picture when the shot sound comes from the user, and judge whether the user is in a speaking state;
a third determining submodule configured to determine that the captured sound belongs to a framing picture in which the user is located when the user is in a speaking state;
a first decoding submodule configured to decode the sound into a video picture corresponding to a viewfinder picture in which the user is present.
9. The apparatus of claim 7, wherein the determining module comprises:
the judgment sub-module is configured to judge the distance between the sound and the camera when the shot sound comes from at least two users;
the second decoding submodule is configured to decode the sound far away from the camera into the video picture shot by the second camera and decode the sound close to the camera into the video picture shot by the first camera.
10. The apparatus of claim 6, further comprising:
a processing module configured to perform preset combining processing on the image captured by the first camera and the image captured by the second camera.
11. A terminal, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
when it is detected that a preset shooting mode is entered, start a first camera and a second camera, wherein the first camera and the second camera have different pixel counts;
display a first framing picture of the first camera and a second framing picture of the second camera on the same screen, which includes: determining, by the terminal, a first definition of the first framing picture and a second definition of the second framing picture, and adjusting the screen proportion of the first framing picture and the screen proportion of the second framing picture based on the first definition and the second definition, wherein the screen proportion of each framing picture is negatively correlated with the level of the corresponding definition;
when a shooting operation is detected, perform shooting through at least one of the first camera and the second camera.
12. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, carries out the steps of:
when it is detected that a preset shooting mode is entered, starting a first camera and a second camera, wherein the first camera and the second camera have different pixel counts;
displaying a first framing picture of the first camera and a second framing picture of the second camera on the same screen, which includes: determining, by the terminal, a first definition of the first framing picture and a second definition of the second framing picture, and adjusting the screen proportion of the first framing picture and the screen proportion of the second framing picture based on the first definition and the second definition, wherein the screen proportion of each framing picture is negatively correlated with the level of the corresponding definition;
when a shooting operation is detected, performing shooting through at least one of the first camera and the second camera.
CN201810345190.9A 2018-04-17 2018-04-17 Shooting method and device and terminal Active CN108419016B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810345190.9A CN108419016B (en) 2018-04-17 2018-04-17 Shooting method and device and terminal


Publications (2)

Publication Number Publication Date
CN108419016A CN108419016A (en) 2018-08-17
CN108419016B true CN108419016B (en) 2022-03-11

Family

ID=63135996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810345190.9A Active CN108419016B (en) 2018-04-17 2018-04-17 Shooting method and device and terminal

Country Status (1)

Country Link
CN (1) CN108419016B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110475015A (en) * 2018-09-03 2019-11-19 王闯 A kind of mobile phone front camera and rear camera synchronous working dual display function
KR102565422B1 (en) * 2018-10-19 2023-08-09 라인 가부시키가이샤 Method, computer apparatus, and computer program for providing authentication data
CN110035230A (en) * 2019-04-15 2019-07-19 珠海格力电器股份有限公司 A kind of picture display control method, system and intelligent terminal based on Folding screen
CN110266983A (en) * 2019-06-30 2019-09-20 联想(北京)有限公司 A kind of image processing method, equipment and storage medium
US11409434B2 (en) 2019-06-30 2022-08-09 Lenovo (Beijing) Co., Ltd. Image collection and processing method, apparatus, and storage medium
CN110505411B (en) * 2019-09-03 2021-05-07 RealMe重庆移动通信有限公司 Image shooting method and device, storage medium and electronic equipment
CN110784674B (en) * 2019-10-30 2022-03-15 北京字节跳动网络技术有限公司 Video processing method, device, terminal and storage medium
CN110740261A (en) * 2019-10-30 2020-01-31 北京字节跳动网络技术有限公司 Video recording method, device, terminal and storage medium
WO2021083146A1 (en) * 2019-10-30 2021-05-06 北京字节跳动网络技术有限公司 Video processing method and apparatus, and terminal and storage medium
CN114449134B (en) * 2020-10-30 2024-02-13 华为技术有限公司 Shooting method and terminal equipment
CN112911060B (en) * 2021-01-22 2022-07-19 维沃移动通信(杭州)有限公司 Display control method, first display control device and first electronic equipment
CN115484380A (en) * 2021-06-16 2022-12-16 荣耀终端有限公司 Shooting method, graphical user interface and electronic equipment
CN113645429B (en) * 2021-08-23 2023-03-21 联想(北京)有限公司 Video acquisition method and electronic equipment
CN113923351B (en) * 2021-09-09 2022-09-27 荣耀终端有限公司 Method, device and storage medium for exiting multi-channel video shooting

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024272A (en) * 2012-12-14 2013-04-03 广东欧珀移动通信有限公司 Double camera control device, method and system of mobile terminal and mobile terminal
CN104349107A (en) * 2013-08-07 2015-02-11 联想(北京)有限公司 Double-camera video recording display method and electronic equipment
CN105245811A (en) * 2015-10-16 2016-01-13 广东欧珀移动通信有限公司 Video recording method and device
CN105578097A (en) * 2015-07-10 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Video recording method and terminal
CN106101525A (en) * 2016-05-31 2016-11-09 北京奇虎科技有限公司 Application call dual camera carries out the method and device shot
CN107395969A (en) * 2017-07-26 2017-11-24 维沃移动通信有限公司 A kind of image pickup method and mobile terminal

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103873756A (en) * 2012-12-14 2014-06-18 天津三星光电子有限公司 Method and device for adopting double lenses for simultaneously taking photos in portable terminal
CN107509029A (en) * 2013-01-07 2017-12-22 华为技术有限公司 A kind of image processing method and device
CN103945045A (en) * 2013-01-21 2014-07-23 联想(北京)有限公司 Method and device for data processing
KR102145190B1 (en) * 2013-11-06 2020-08-19 엘지전자 주식회사 Mobile terminal and control method thereof
KR102153436B1 (en) * 2014-01-15 2020-09-08 엘지전자 주식회사 Mobile terminal and method for controlling the same
US20170134651A1 (en) * 2014-04-22 2017-05-11 Mei-Ling Lo Portable Device for Generating Wide Angle Images
CN106506924A (en) * 2016-12-05 2017-03-15 深圳天珑无线科技有限公司 The image pickup method and mobile terminal of mobile terminal
CN107071329A (en) * 2017-02-27 2017-08-18 努比亚技术有限公司 The method and device of automatic switchover camera in video call process
CN107317963A (en) * 2017-05-24 2017-11-03 努比亚技术有限公司 A kind of double-camera mobile terminal control method, mobile terminal and storage medium


Also Published As

Publication number Publication date
CN108419016A (en) 2018-08-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant