CN107347140B - Shooting method, mobile terminal and computer readable storage medium - Google Patents

Shooting method, mobile terminal and computer readable storage medium

Info

Publication number
CN107347140B
CN107347140B (application CN201710735010.3A)
Authority
CN
China
Prior art keywords
camera
shooting
data
controlling
light source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710735010.3A
Other languages
Chinese (zh)
Other versions
CN107347140A (en)
Inventor
张胜利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201710735010.3A priority Critical patent/CN107347140B/en
Publication of CN107347140A publication Critical patent/CN107347140A/en
Application granted granted Critical
Publication of CN107347140B publication Critical patent/CN107347140B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a shooting method, a mobile terminal, and a computer-readable storage medium. The method includes: acquiring first data collected by a first camera and second data collected by a second camera; and controlling a display screen to alternately display the first data and the second data according to a preset display frame rate, where the first data and the second data are image data or video data. The display screen of the mobile terminal can form two full-screen pictures at different viewing angles that do not affect each other. When multiple users shoot simultaneously, the content shot by each user is alternately displayed on the display screen, so that each user sees the full-screen picture of his or her own shot within his or her own viewing angle, achieving multi-user shared shooting and display while ensuring the two pictures do not interfere with each other.

Description

Shooting method, mobile terminal and computer readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a shooting method, a mobile terminal, and a computer-readable storage medium.
Background
The display screen is currently the most important human-computer interaction medium of intelligent mobile terminals: most terminals convey videos, games, text, pictures, and other information through it. With the wide adoption of mobile photography, more and more mobile terminals are equipped with a shooting function, and some even carry two or more cameras to meet users' shooting needs. However, because current liquid crystal displays can present only one full-screen picture at a time, even a device with two or more cameras can display only one shooting preview picture full-screen at any given moment.
Disclosure of Invention
The embodiments of the invention provide a shooting method, a mobile terminal, and a computer-readable storage medium, aiming to solve the prior-art display limitation that only one shooting preview picture can be displayed full-screen at a time.
In a first aspect, an embodiment of the present invention provides a shooting method applied to a mobile terminal. The mobile terminal includes a display screen, a first camera, and a second camera, and the display screen includes a display panel, a backlight, and a light guide plate, where the backlight includes at least two light source assemblies, each arranged on a different side edge of the light guide plate. Light emitted by a first light source assembly of the at least two light source assemblies is projected through the light guide plate onto the display panel at an incident angle not smaller than a first angle to form a first full-screen picture, whose visible range is the range of angles at which that light exits the display panel at an exit angle not smaller than the first angle. Light emitted by a second light source assembly of the at least two light source assemblies is projected through the light guide plate onto the display panel at an incident angle not smaller than a second angle to form a second full-screen picture, whose visible range is the range of angles at which that light exits the display panel at an exit angle not smaller than the second angle. Within a preset angle range, the visible range of the second full-screen picture does not overlap that of the first full-screen picture;
the shooting method comprises the following steps:
acquiring first data acquired by a first camera and second data acquired by a second camera;
controlling a display screen to alternately display first data and second data according to a preset display frame rate;
wherein the first data and the second data are image data or video data.
In a second aspect, an embodiment of the present invention further provides a mobile terminal. The mobile terminal includes a display screen, a first camera, and a second camera, and the display screen includes a display panel, a backlight, and a light guide plate, where the backlight includes at least two light source assemblies, each arranged on a different side edge of the light guide plate. Light emitted by a first light source assembly of the at least two light source assemblies is projected through the light guide plate onto the display panel at an incident angle not smaller than a first angle to form a first full-screen picture, whose visible range is the range of angles at which that light exits the display panel at an exit angle not smaller than the first angle. Light emitted by a second light source assembly of the at least two light source assemblies is projected through the light guide plate onto the display panel at an incident angle not smaller than a second angle to form a second full-screen picture, whose visible range is the range of angles at which that light exits the display panel at an exit angle not smaller than the second angle. Within a preset angle range, the visible range of the second full-screen picture does not overlap that of the first full-screen picture.
The mobile terminal further includes:
the first acquisition module is used for acquiring first data acquired by the first camera and second data acquired by the second camera;
the display module is used for controlling the display screen to alternately display the first data and the second data according to a preset display frame rate;
wherein the first data and the second data are image data or video data.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes a processor, a memory, and a computer program stored in the memory and operable on the processor, and the processor implements the steps of the shooting method when executing the computer program.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the shooting method described above.
With the shooting method, mobile terminal, and computer-readable storage medium of the embodiments of the invention, the display screen of the mobile terminal can form two full-screen pictures at different viewing angles that do not affect each other. When multiple users shoot simultaneously, the content shot by each user is alternately displayed on the display screen, so that each user sees the full-screen picture of his or her own shot within his or her own viewing angle, achieving multi-user shared shooting and display while keeping the two pictures from interfering with each other.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a first schematic view of a display screen according to an embodiment of the present invention;
FIG. 2 is a second schematic structural diagram of a display screen in an embodiment of the invention;
FIG. 3 is a flow chart illustrating a photographing method according to an embodiment of the present invention;
FIG. 4 is a timing diagram illustrating a photographing method according to an embodiment of the invention;
FIG. 5 is a first schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 6 is a second schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 7 is a first schematic block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 8 is a second schematic block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 9 is a block diagram of a mobile terminal according to an embodiment of the present invention.
Reference numerals: 1. display panel; 2. backlight; 3. light guide plate; 4. non-light-transmitting lampshade; 5. first camera; 6. second camera; 7. front camera;
21. light source assembly;
31. light guide surface;
71. first front camera; 72. second front camera.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The shooting method of the embodiment of the invention is applied to a mobile terminal that includes a first camera, a second camera, and a display screen. Specifically, as shown in FIG. 1, the display screen includes a display panel 1, a backlight 2, and a light guide plate 3, where the backlight 2 includes at least two light source assemblies 21, each arranged on a different side of the light guide plate 3.
Specifically, light emitted by a first light source assembly 21 of the at least two light source assemblies 21 is projected through the light guide plate 3 onto the display panel 1 at an incident angle not smaller than a first angle to form a first full-screen picture; light emitted by a second light source assembly of the at least two light source assemblies 21 is projected through the light guide plate 3 onto the display panel 1 at an incident angle not smaller than a second angle to form a second full-screen picture. The visible range of the first full-screen picture is the range of angles at which light from the first light source assembly exits the display panel 1 at an exit angle not smaller than the first angle, and the visible range of the second full-screen picture is the range of angles at which light from the second light source assembly exits the display panel 1 at an exit angle not smaller than the second angle. Therefore, when users share the screen, each can see his or her own full-screen picture within his or her own viewing-angle range. Furthermore, the visible range of the second full-screen picture and the visible range of the first full-screen picture do not overlap within a preset angle range, so the pictures of different users do not interfere with or affect each other, improving user experience.
Further, at least two sets of light guide surfaces 31 with different inclination angles are disposed on the side of the light guide plate 3 away from the display panel 1. The surfaces within each set share the same inclination angle, and each light source assembly 21 corresponds to one set of light guide surfaces 31. A light guide surface 31 guides the projection direction of the light emitted by its corresponding light source assembly 21; specifically, it redirects the light projected onto it at a fixed refraction angle.
Specifically, the first light source assembly and the second light source assembly are disposed opposite each other, and the two sets of light guide surfaces 31 of the light guide plate 3 form at least one sawtooth-shaped groove surface. The number of surfaces in each set equals the number of sawtooth-shaped groove surfaces formed; for example, if each set contains one light guide surface 31, the two sets together form one sawtooth-shaped groove surface. Within a sawtooth-shaped groove surface, the light guide surface 31 with a first inclination angle guides the projection direction of the first light source assembly's light, and the light guide surface 31 with a second inclination angle guides that of the second light source assembly's light.
Further, as shown in FIG. 2, the first and second light source assemblies are disposed opposite each other, and the backlight 2 additionally includes a third and a fourth light source assembly, also disposed opposite each other, so that the four light source assemblies enclose a quadrangle. Correspondingly, the light guide surfaces 31 of the light guide plate 3 form at least one square-cone-shaped groove surface. The number of surfaces in each set equals the number of square-cone-shaped groove surfaces formed; for example, if each set contains four light guide surfaces 31, four square-cone-shaped groove surfaces are formed. Within a square-cone-shaped groove surface, the light guide surface 31 with a third inclination angle guides the projection direction of the first light source assembly's light, the surface with a fourth inclination angle guides that of the second light source assembly, the surface with a fifth inclination angle guides that of the third light source assembly, and the surface with a sixth inclination angle guides that of the fourth light source assembly.
Further, to confine the emitted light, each light source assembly 21 is covered by a non-light-transmitting lampshade 4, which limits the light emitted by the light source assembly 21 to within a preset angle and ensures that the light source assembly 21 does not leak light.
Each light source assembly 21 includes at least one row of LED lamps, and each row includes at least one LED lamp; the light from different LED lamps does not block one another.
The mobile terminal's display screen thus includes at least two sets of light source assemblies: light emitted by the first light source assembly forms a first full-screen picture through the display panel, and light emitted by the second light source assembly forms a second full-screen picture, so that each user views his or her own full-screen picture when the screen is shared. In addition, because the visible range of the second full-screen picture and the visible range of the first full-screen picture do not overlap within a preset angle range, the users' pictures do not interfere with or affect each other when the screen is shared, further improving user experience.
Further, as shown in fig. 3, the shooting method provided by the embodiment of the present invention specifically includes the following steps:
step 301: and acquiring first data acquired by the first camera and second data acquired by the second camera.
The first data and the second data are image data or video data. As described above, the display screen of the mobile terminal includes a backlight and a display panel, the backlight includes at least two light source assemblies, and the light source assemblies correspond respectively to the first data collected by the first camera and the second data collected by the second camera. Specifically, when two users share the screen and shoot simultaneously, left user A, located in the visible range of the first full-screen picture, collects first image or video data through the first camera, and right user B, located in the visible range of the second full-screen picture, collects second image or video data through the second camera.
Step 302: and controlling the display screen to alternately display the first data and the second data according to a preset display frame rate.
When a user starts screen sharing, the first data and the second data corresponding to the respective light source assemblies are displayed alternately at a preset display frame rate, achieving simultaneous screen sharing among different users. For example, the first light source assembly corresponds to the first camera, so the display screen shows the first data to left user A within the visible range of the first full-screen picture; the second light source assembly corresponds to the second camera, so the display screen shows the second data to right user B within the visible range of the second full-screen picture.
Based on this display-screen structure, the embodiment of the invention allows two full-screen pictures to be watched in different angle ranges: the first data collected by the first camera can be watched within the visible range of the first full-screen picture, and the second data collected by the second camera within the visible range of the second full-screen picture. Two users thus each see a full-screen picture, the two pictures do not affect each other, and the user experience is improved.
Further, step 302 specifically includes: turning on the first light source assembly while turning off the second light source assembly, and controlling the display screen to display the first data; after a preset time interval, turning off the first light source assembly while turning on the second light source assembly, and controlling the display screen to display the second data; and, after another preset time interval, repeating these two steps in a loop until it is detected that the screen sharing function has been turned off, or that the first camera or the second camera has been turned off.
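The loop above can be sketched as follows. This is a minimal, hypothetical illustration: the patent defines no software API, so the backlight and panel interfaces here are stand-ins.

```python
# Minimal sketch of the alternating-display loop described above.
# The backlight/panel interfaces are hypothetical stand-ins; the
# patent does not define a software API.

class AlternatingDisplay:
    def __init__(self, display_frame_rate_hz):
        # Time between successive panel frames at the preset display frame rate.
        self.frame_interval_s = 1.0 / display_frame_rate_hz
        self.log = []  # records (lit light source, frame shown), for illustration

    def _show(self, light_source, frame):
        # Stand-in for: turn on `light_source`, turn the other one off,
        # and drive the display panel with `frame`.
        self.log.append((light_source, frame))

    def run(self, first_data, second_data, sharing_on=lambda: True):
        # Alternate: first light source + first data, then after one
        # interval, second light source + second data, and so on, until
        # sharing stops or either camera's frames run out.
        for f1, f2 in zip(first_data, second_data):
            if not sharing_on():
                break
            self._show("first", f1)
            self._show("second", f2)

display = AlternatingDisplay(display_frame_rate_hz=120)
display.run(["A-frame-0", "A-frame-1"], ["B-frame-0", "B-frame-1"])
print(display.log)
# [('first', 'A-frame-0'), ('second', 'B-frame-0'),
#  ('first', 'A-frame-1'), ('second', 'B-frame-1')]
```

Interleaving the two sources frame by frame is what lets each viewer, positioned in one light source's visible range, perceive only his or her own stream.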
Specifically, after the display screen lights up, the shot data picture is refreshed and displayed at the preset frame rate, with the LED lamps on both sides initially turned on together. The mobile terminal then checks the state of the screen sharing function: if screen sharing is on and both the first camera and the second camera are on, the first data collected by the first camera and the second data collected by the second camera are acquired, i.e., step 301 is executed. If screen sharing is off, or the first camera or the second camera is off, all light source assemblies in the backlight are turned on, i.e., the LED lamps on both sides are lit simultaneously.
The principle of the screen sharing function when two users (left user A and right user B) shoot simultaneously is further described below with reference to the timing diagram of alternately lighting the LEDs. As shown in FIG. 4: (1) the right LED lamp is turned on while the left LED lamp is turned off, and the first frame of the display panel shows the first frame of the first data collected by the first camera; at this moment only left user A can see this content. (2) After a preset time interval, the right LED lamp is turned off while the left LED lamp is turned on, and the second frame of the display panel shows the first frame of the second data collected by the second camera; only right user B can see this content. (3) After another preset time interval, the right LED lamp is turned on while the left LED lamp is turned off, and the third frame of the display panel shows the second frame of the first data; only left user A can see this content. (4) After another preset time interval, the right LED lamp is turned off while the left LED lamp is turned on, and the fourth frame of the display panel shows the second frame of the second data; only right user B can see this content.
The data pictures collected by the first and second cameras are displayed alternately in this way. When the preset time interval is small enough, i.e., the preset display frame rate is high enough, the users on both sides each see smooth, flicker-free content. To ensure continuity of image display, the preset display frame rate in the embodiment of the invention is generally higher than that of an ordinary display screen: if an ordinary display refreshes at 60 frames/second, the preset display frame rate here needs to be 120 frames/second, since each user sees only every other frame. That is, after the screen sharing function is turned on, the mobile terminal alternately displays the picture frames each user wants to shoot, so the left and right users simultaneously watch, and watch only, their own required pictures, improving user experience.
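The frame-rate figures above follow from simple arithmetic: with two viewers time-multiplexed on one panel, each viewer sees only every other frame, so the panel rate must be the per-viewer rate times the number of viewers.

```python
# Worked check of the frame-rate figures above: two time-multiplexed
# viewers each seeing 60 frames/second require a 120 frames/second panel.

def required_panel_rate_hz(per_viewer_rate_hz, num_viewers=2):
    # Each viewer sees 1 out of every `num_viewers` panel frames.
    return per_viewer_rate_hz * num_viewers

panel_rate = required_panel_rate_hz(60)   # 120 frames/second
interval_ms = 1000.0 / panel_rate         # preset time interval between panel frames
print(panel_rate, round(interval_ms, 2))
# 120 8.33
```

So the "preset time interval" between frames is about 8.33 ms, and each viewer's own stream still refreshes every 16.67 ms, matching an ordinary 60 frames/second display.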
Further, after alternately displaying the first data collected by the first camera and the second data collected by the second camera, the method further includes: detecting whether a shooting control instruction is received; and, if a shooting control instruction is detected, controlling the first camera and/or the second camera to execute a shooting operation according to it. The shooting control instruction is associated with a rotation angle and a rotation direction of the first camera and/or the second camera. Specifically, according to the shooting control instruction, the first camera and the second camera can each be deflected by a certain angle relative to the viewing direction of the corresponding user; the deflection of the two cameras is controlled independently, so the left camera can be deflected relative to the viewing direction of left user A, and the right camera relative to that of right user B.
Specifically, as shown in FIG. 5, the first camera 5 and the second camera 6 are both rear cameras with adjustable shooting angles, i.e., both can rotate, and the mobile terminal further includes at least one front camera 7. The step of detecting whether a shooting control instruction is received then includes: controlling the at least one front camera to detect a control action. The step of controlling the first camera and/or the second camera according to the shooting control instruction includes: if a control action is detected, controlling the first camera and/or the second camera to execute a shooting operation according to that control action.
Specifically, as shown in FIG. 6, the at least one front camera 7 includes a first front camera 71 and a second front camera 72. Controlling the at least one front camera to detect a control action then means controlling the first front camera and the second front camera to detect control actions separately. If the first front camera detects a first control action within the visible range of the first full-screen picture, the first camera is controlled to rotate and shoot according to that action; if the second front camera detects a second control action within the visible range of the second full-screen picture, the second camera is controlled to rotate and shoot according to that action. That is, control actions are captured by the front cameras and include at least one of a gesture control action, a head swing control action, and a finger pointing control action. For example, with head actions, turning the head left rotates the camera left by a certain angle and lifting the head rotates it upward; with gesture recognition, pointing a finger right rotates the camera right by a certain angle and pointing upward rotates it upward. Thus, if the first front camera detects that a user within the visible range of the first full-screen picture turns his or her head to the left, the first camera is controlled to rotate left by a certain angle and execute the shooting operation.
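The action-to-rotation mapping described above could be tabulated as follows. The action names and the fixed 15-degree step are assumptions for illustration only; the patent specifies rotation by "a certain angle" without giving values.

```python
# Illustrative mapping from detected control actions to camera rotation
# commands. The action names and the 15-degree step are assumptions;
# the patent only specifies rotation by "a certain angle".

ROTATION_STEP_DEG = 15

ACTION_TO_ROTATION = {
    "head_turn_left":    ("yaw",   -ROTATION_STEP_DEG),
    "head_turn_right":   ("yaw",   +ROTATION_STEP_DEG),
    "head_lift":         ("pitch", +ROTATION_STEP_DEG),
    "finger_point_up":   ("pitch", +ROTATION_STEP_DEG),
    "finger_point_right": ("yaw",  +ROTATION_STEP_DEG),
}

def rotation_command(action):
    """Return (axis, delta_degrees) for a recognized action, else None."""
    return ACTION_TO_ROTATION.get(action)

print(rotation_command("head_turn_left"))    # ('yaw', -15)
print(rotation_command("unknown_gesture"))   # None
```

Keeping the mapping per front camera (one table for user A's camera, one for user B's) preserves the independence of the two users' controls.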
In a specific application, the user unlocks the display screen and starts the camera; the LEDs on both sides are turned on simultaneously, and the display screen refreshes and displays the user's shooting picture. When multiple users shoot simultaneously, the terminal checks whether a user has tapped to enter the shared shooting function; if not, it keeps both LED banks on together and displays the data picture collected by the camera. If it detects that a user has entered the shared shooting function, the preview or shooting pictures of the first and second (left and right) cameras are displayed alternately according to the timing diagram of FIG. 4, so that the left and right shooting users simultaneously watch the full-screen preview or shooting picture each of them needs. When a front camera detects that a user has input a camera-rotation instruction, such as swinging the head up, down, left, or right, or pointing a finger or palm up, down, left, or right, the corresponding camera rotates by a certain angle in that direction, changing the shooting or preview angle of view, and the deflection angle is recorded.
Further, after the step of detecting whether a shooting control instruction is received, the method further includes: detecting a preset action for ending shooting; and, if the preset action is detected, controlling the first camera and/or the second camera to output image data or video data.
Specifically, as shown in FIG. 6, the first camera 5 and the second camera 6 are both rear cameras, and the mobile terminal includes a first front camera 71 and a second front camera 72. Detecting the preset action for ending shooting means controlling the first and second front cameras to detect preset actions separately. If the first front camera detects a first preset action within the visible range of the first full-screen picture, the first camera is controlled to output image data or video data; if the second front camera detects a second preset action within the visible range of the second full-screen picture, the second camera is controlled to output image data or video data. That is, when a user performs a photographing or video-recording operation by making a preset action such as blinking, a scissors gesture, or smiling, the current user's photo or video is cached. For example, when left user A is detected making such a preset action, left user A's photo or video is cached; when right user B is detected making one, right user B's photo or video is cached.
Further, the preset action for ending the shooting may also be the user clicking a corresponding function key. Specifically, the step of detecting the preset action for ending the shooting comprises: respectively detecting touch operations on a first shooting button in the first full-screen picture and a second shooting button in the second full-screen picture. If the preset action is detected, the step of controlling the first camera and/or the second camera to output image data or video data comprises: if a first touch operation on the first shooting button is detected, controlling the first camera to output image data or video data; and if a second touch operation on the second shooting button is detected, controlling the second camera to output image data or video data. That is, when a user performs a photographing or video-recording operation, for example clicking a video-recording or photographing button, that user's current photo or video is cached. For example, when the left user A is detected clicking the video-recording or photographing button, the photo or video of the left user A is cached; when the right user B is detected clicking the video-recording or photographing button, the photo or video of the right user B is cached.
After the step of controlling the first camera and/or the second camera to execute the shooting operation according to the shooting control instruction, the method further comprises: performing image synthesis on the image data shot by the first camera and the second camera to generate one image; or performing video synthesis on the video data shot by the first camera and the second camera to generate one video. Specifically, according to the recorded deflection angles of the cameras, the first data collected by the first camera and the second data collected by the second camera are combined in the background into a left-right wide-angle photo or video. When two users on the left and right shoot simultaneously with different cameras, the full-screen preview and shooting pictures corresponding to the two users can be displayed on the display screen at the same time; the shooting angle of view is adjusted by detecting the head rotation, gestures and the like of the corresponding user through the front cameras, so that the user at each of the left and right viewing angles sees only the picture he or she needs, without interfering with the picture of the user at the other viewing angle. In addition, after shooting is finished, the data shot by the two users can be synthesized to generate a large-viewing-angle photo or video.
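As a toy stand-in for the background synthesis step, the sketch below orders the two captures by their recorded pan offsets and concatenates them into one wide frame; real stitching would also warp and blend by those angles. Images are modeled as lists of pixel rows, and all names are assumed.

```python
# Minimal sketch: combine the two captures into one left-right wide image,
# ordered by each camera's recorded pan offset (concatenation stands in
# for true angle-aware stitching).
def combine_side_by_side(first, second, first_pan_deg, second_pan_deg):
    """first/second: images as equal-height lists of pixel rows."""
    left, right = ((first, second) if first_pan_deg <= second_pan_deg
                   else (second, first))
    return [l_row + r_row for l_row, r_row in zip(left, right)]

a = [[1, 1], [1, 1]]
b = [[2, 2], [2, 2]]
print(combine_side_by_side(a, b, -10.0, 10.0))  # → [[1, 1, 2, 2], [1, 1, 2, 2]]
```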
The manner of acquiring the shooting control instruction through the front camera is described above; the manner of controlling the shooting process through voice is further described below. Specifically, the step of detecting whether a shooting control instruction is received includes: acquiring voice data collected by a microphone; comparing the voice data collected by the microphone with pre-stored reference voice data; and if the voice data collected by the microphone matches at least one item of the pre-stored reference voice data, determining that a shooting control instruction has been received. The reference voice data includes: first reference voice data associated with the first camera, and second reference voice data associated with the second camera. That is to say, by collecting voice information input by the users, the first camera and the second camera are controlled to perform angle rotation and shooting operations. For example, if voice information of the left user A is detected, the first camera is controlled to rotate and shoot; if voice information of the right user B is detected, the second camera is controlled to rotate and shoot.
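The voice-matching step can be sketched as below. The embodiment does not specify a matching algorithm, so a toy cosine similarity over raw samples stands in for a real speaker-recognition model; all names and the threshold are assumptions.

```python
import math

# Sketch: match microphone samples against per-camera reference voice data
# and return the associated camera, or None if nothing matches.
def similarity(a, b):
    """Toy cosine similarity between two equal-length sample sequences."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(y * y for y in b)) or 1.0
    return dot / (na * nb)

def route_voice_command(samples, references, threshold=0.8):
    """references: mapping like {"camera_1": ref_samples, ...}."""
    best_cam, best_score = None, threshold
    for cam, ref in references.items():
        score = similarity(samples, ref)
        if score > best_score:
            best_cam, best_score = cam, score
    return best_cam

refs = {"camera_1": [1.0, 0.0, 0.0], "camera_2": [0.0, 1.0, 0.0]}
print(route_voice_command([1.0, 0.1, 0.0], refs))  # → camera_1
```

Returning None when no reference clears the threshold corresponds to the case where no shooting control instruction is deemed received.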
In a specific application process, the user unlocks the display screen and starts the camera; the LEDs on both sides are turned on simultaneously, and the display screen refreshes to show the user's shooting picture. When multiple users shoot at the same time, it is determined whether the user has tapped to enter the shared shooting function; if not, the LEDs on both sides remain on simultaneously and the pictures collected by the camera are displayed. If a tap to enter the shared shooting function is detected, the previews or shooting pictures of the first and second cameras (the left and right cameras) are displayed alternately according to the timing diagram shown in fig. 4, so that the left and right shooting users can each watch the full-screen preview or shooting picture they need at the same time. Further, if a shooting control instruction is detected, the step of controlling the first camera and/or the second camera to execute a shooting operation according to the shooting control instruction comprises: if the voice data collected by the microphone matches the first reference voice data, controlling the first camera to execute a shooting operation according to the voice data collected by the microphone; and if the voice data collected by the microphone matches the second reference voice data, controlling the second camera to execute a shooting operation according to the voice data collected by the microphone. For example, when the microphone collects a voice input of "up", "down", "left" or "right" from the left user A, the left camera deflects by a corresponding angle in that direction, and the deflection angle of that camera is recorded at the same time.
Further, before the step of detecting whether the shooting control instruction is received, the method further comprises: respectively displaying first prompt information and second prompt information on the first full-screen picture and the second full-screen picture; acquiring first reference voice information of a first user and second reference voice information of a second user collected by the microphone; and respectively establishing a first association relationship between the first camera and the first reference voice information and a second association relationship between the second camera and the second reference voice information. The first prompt information is used for prompting the first user to input voice information, and the second prompt information is used for prompting the second user to input voice information. That is, a prompt interface is displayed in the left-screen viewing angle, prompting the left user A to speak; the voice data of the left user A is recorded through the integrated microphone and stored as the first reference voice information. Similarly, a prompt interface is displayed in the right-screen viewing angle, prompting the right user B to speak; the voice data of the right user B is recorded through the integrated microphone and stored as the second reference voice information.
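The enrollment flow in this paragraph, prompting each user in his or her own viewing zone and binding the recorded clip to that user's camera, might be sketched as follows; the callback names and zone/camera identifiers are assumptions.

```python
# Sketch: prompt each user in turn, record a reference clip, and associate
# it with that user's camera (hypothetical names throughout).
def enroll_users(show_prompt, record_clip):
    """show_prompt(zone, text) displays a prompt in the given full-screen
    picture; record_clip(user) returns that user's reference voice data."""
    associations = {}
    for zone, camera, user in (("first", "camera_1", "user_A"),
                               ("second", "camera_2", "user_B")):
        show_prompt(zone, "Please speak to register your voice")
        associations[camera] = record_clip(user)
    return associations
```

With the associations built, the matching step described above only needs to look up which reference clip the incoming audio resembles.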
Further, after the step of detecting whether a shooting control instruction is received, the method further comprises: detecting a preset action for ending shooting; and if the preset action is detected, controlling the first camera and/or the second camera to output image data or video data.
Specifically, the detection of the preset action for ending the shooting can also be realized by acquiring voice information. When a user performs a photographing or video-recording operation, the user inputs voice information such as "photograph" or "record" by voice; after acquiring the corresponding voice information, the mobile terminal starts photographing or video recording, and the current photo or video of the user is cached.
After the step of controlling the first camera and/or the second camera to execute the shooting operation according to the shooting control instruction, the method further comprises: performing image synthesis on the image data shot by the first camera and the second camera to generate one image; or performing video synthesis on the video data shot by the first camera and the second camera to generate one video. Specifically, according to the recorded deflection angles of the cameras, the first data collected by the first camera and the second data collected by the second camera are combined in the background into a left-right wide-angle photo or video. When two users on the left and right shoot simultaneously with different cameras, the full-screen preview and shooting pictures corresponding to the two users can be displayed on the display screen at the same time; the voice data of the left and right users is recorded in advance, and when a camera-deflection control signal input by a user's voice is recognized, the corresponding camera deflects by a certain angle to adjust the preview and shooting angle of view, so that the user at each of the left and right viewing angles sees only the picture he or she needs, without interfering with the picture of the user at the other viewing angle. In addition, after shooting is finished, the data shot by the two users can be synthesized to generate a large-viewing-angle photo or video.
The above embodiments respectively describe the shooting methods in different scenes in detail, and the mobile terminal corresponding to the above embodiments will be further described with reference to fig. 7 and 8.
As shown in fig. 7, the mobile terminal 700 according to the embodiment of the present invention can obtain the first data collected by the first camera and the second data collected by the second camera as in the foregoing embodiments, and control the display screen to alternately display the first data and the second data according to a preset display frame rate, achieving the same effects as the method described above. The mobile terminal 700 specifically comprises a display screen, a first camera and a second camera, wherein the display screen comprises a backlight source having at least two light source assemblies, each light source assembly being arranged on a different side edge of the light guide plate. Light emitted by a first light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a first angle to form a first full-screen picture, the visible range of the first full-screen picture being the angle range of light exiting the display panel at an exit angle not smaller than the first angle. Light emitted by a second light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a second angle to form a second full-screen picture, the visible range of the second full-screen picture being the angle range of light exiting the display panel at an exit angle not smaller than the second angle. The visible range of the second full-screen picture does not overlap the visible range of the first full-screen picture within a preset angle range. In addition, the mobile terminal 700 further includes the following functional modules:
the first obtaining module 710 is configured to obtain first data collected by a first camera and second data collected by a second camera;
the display module 720 is configured to control the display screen to alternately display the first data and the second data according to a preset display frame rate;
wherein the first data and the second data are image data or video data.
Wherein, the display module 720 includes:
the first display sub-module 721 is configured to turn on the first light source assembly, simultaneously turn off the second light source assembly, and control the display screen to display the first data;
the second display submodule 722 is configured to, after a preset time interval, turn off the first light source assembly and simultaneously turn on the second light source assembly, and control the display screen to display the second data;
the first processing submodule 723 is configured to, at intervals of the preset time, cyclically execute the steps of turning on the first light source assembly while turning off the second light source assembly and controlling the display screen to display the first data, and then turning off the first light source assembly while turning on the second light source assembly and controlling the display screen to display the second data, until it is detected that the screen sharing function is turned off, or that the first camera or the second camera is turned off.
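The cycle implemented by submodules 721–723 can be sketched as a simple toggle loop; the callback names and frame sources are illustrative assumptions.

```python
# Sketch of the alternating-display cycle: each interval, swap which light
# source assembly is lit and show that zone's frame, until sharing stops.
def run_display_cycle(set_light_sources, show, frames_1, frames_2, sharing_on):
    """set_light_sources(first_on, second_on) drives the backlight;
    show(frame) refreshes the panel; sharing_on() reports whether the
    screen sharing function (and both cameras) are still active."""
    show_first = True
    while sharing_on():
        if show_first:
            set_light_sources(True, False)   # only zone 1 sees the panel
            show(next(frames_1))
        else:
            set_light_sources(False, True)   # only zone 2 sees the panel
            show(next(frames_2))
        show_first = not show_first
        # a real loop would sleep for the preset time interval here
```

Because the backlight toggles in lockstep with the displayed frame, each viewing zone perceives only its own continuous full-screen picture.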
Wherein, the mobile terminal 700 further comprises:
a first detecting module 730, configured to detect whether a shooting control instruction is received;
the shooting module 740 is configured to, when a shooting control instruction is detected, control the first camera and/or the second camera to perform a shooting operation according to the shooting control instruction;
wherein the shooting control instruction is associated with the rotation angle and the rotation direction of the first camera and/or the second camera.
Wherein the first camera and the second camera are both rear cameras with adjustable shooting angles, and the mobile terminal further comprises at least one front camera;
the first detection module 730 includes:
the first detection submodule 731 is configured to control at least one front-facing camera to perform control action detection;
the photographing module 740 includes:
and the first shooting sub-module 741 is used for controlling the first camera and/or the second camera to execute shooting operation according to the control action when the control action is detected.
The at least one front camera comprises a first front camera and a second front camera;
the first detection submodule 731 includes:
a first detection unit 7311, configured to control the first front camera and the second front camera to perform control action detection;
the first photographing sub-module 741 includes:
the first shooting unit 7411 is configured to control the first camera to perform rotation shooting according to a first control action when the first front camera detects the first control action within the visible range of the first full-screen picture;
the second shooting unit 7412 is configured to control the second camera to perform rotation shooting according to the second control action when the second front camera detects the second control action within the visible range of the second full-screen picture.
Wherein the control action comprises: at least one of a gesture control action, a head swing control action, and a finger pointing control action.
Wherein, the first detecting module 730 further comprises:
a first obtaining sub-module 732, configured to obtain voice data collected by a microphone;
the comparison submodule 733 is used for comparing the voice data collected by the microphone with pre-stored reference voice data;
the second detection submodule 734 is configured to determine that a shooting control instruction is received when the voice data acquired by the microphone matches at least one of pre-stored reference voice data;
wherein the reference voice data includes: first reference voice data associated with the first camera, and second reference voice data associated with the second camera.
Wherein the photographing module 740 includes:
the second shooting submodule 742 is configured to control the first camera to perform shooting operation according to the voice data acquired by the microphone when the voice data acquired by the microphone matches the first reference voice data;
and a third shooting sub-module 743, configured to control the second camera to perform a shooting operation according to the voice data collected by the microphone when the voice data collected by the microphone matches the second reference voice data.
Wherein, the first detecting module 730 further comprises:
the prompt sub-module 735 is configured to display the first prompt information and the second prompt information on the first full screen picture and the second full screen picture, respectively;
the second obtaining sub-module 736 is configured to obtain the first reference voice information of the first user and the second reference voice information of the second user, which are collected by the microphone;
the second processing submodule 737 is configured to establish a first association relationship between the first camera and the first reference voice information and a second association relationship between the second camera and the second reference voice information, respectively;
the first prompt message is used for prompting the first user to input voice information, and the second prompt message is used for prompting the second user to input voice information.
Wherein, the mobile terminal 700 further comprises:
a second detection module 750 for detecting a preset action for ending the photographing;
the first processing module 760 is configured to control the first camera and/or the second camera to output image data or video data when a preset action is detected.
Wherein the first camera and the second camera are both rear cameras, and the mobile terminal comprises a first front camera and a second front camera;
the second detection module 750 includes:
the third detection submodule 751 is used for controlling the first front camera and the second front camera to perform preset action detection respectively;
the first processing module 760 includes:
the third processing sub-module 761 is configured to control the first camera to output image data or video data when the first front-facing camera detects a first preset action within a visible range of the first full-screen picture;
the fourth processing submodule 762 is configured to, when the second front-facing camera detects a second preset action within the visible range of the second full-screen picture, control the second camera to output image data or video data.
Wherein the second detection module 750 includes:
a fourth detection submodule 752, configured to detect touch operations on the first shooting button in the first full-screen picture and the second shooting button in the second full-screen picture, respectively;
the first processing module 760 includes:
a fifth processing submodule 763, configured to control the first camera to output image data or video data when the first touch operation on the first shooting button is detected;
the sixth processing sub-module 764 is configured to control the second camera to output image data or video data when the second touch operation on the second shooting button is detected.
Wherein, the mobile terminal 700 further comprises:
the first synthesis module 770 is configured to perform image synthesis on image data captured by the first camera and the second camera to generate one image;
or,
the second synthesizing module 780 is configured to perform video synthesis on the video data captured by the first camera and the second camera to generate a video.
Wherein, the mobile terminal 700 further comprises:
a third detecting module 790, configured to detect a screen sharing function and an open state of the first camera and/or the second camera;
the second processing module 7100 is configured to, when it is detected that the screen sharing function is turned on and both the first camera and the second camera are turned on, obtain first data acquired by the first camera and second data acquired by the second camera.
Wherein, the mobile terminal 700 further comprises:
and the third processing module 7110 is configured to, when it is detected that the screen sharing function is turned off, or it is detected that the first camera or the second camera is turned off, turn on all light source modules in the backlight source.
It should be noted that the mobile terminal according to the embodiment of the present invention corresponds to the above shooting method, and the implementation manners and technical effects of the above method are all applicable to this mobile terminal embodiment. The display screen of the mobile terminal can form two full-screen pictures that do not affect each other at different viewing angles. When multiple users shoot simultaneously, the shooting content of the different users can be displayed alternately on the display screen, so that each user sees his or her own full-screen shooting picture within the corresponding viewing angle, realizing shared multi-user shooting and display while avoiding mutual interference between the two pictures.
In order to better achieve the above object, an embodiment of the present invention further provides a terminal, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the shooting method described above are implemented. An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the shooting method described above.
Fig. 9 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 900 in fig. 9 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 900 in fig. 9 includes a power supply 910, a memory 920, an input unit 930, a display unit 940, a photographing component including a first camera 951 and a second camera 952, a processor 960, a WiFi (wireless fidelity) module 970, an audio circuit 980, and an RF circuit 990. In addition, the mobile terminal further includes a display screen, wherein the display screen comprises a backlight source having at least two light source assemblies, each light source assembly being arranged on a different side edge of the light guide plate. Light emitted by a first light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a first angle to form a first full-screen picture, the visible range of which is the angle range of light exiting the display panel at an exit angle not smaller than the first angle. Light emitted by a second light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a second angle to form a second full-screen picture, the visible range of which is the angle range of light exiting the display panel at an exit angle not smaller than the second angle. The visible range of the second full-screen picture does not overlap the visible range of the first full-screen picture within a preset angle range.
The input unit 930 may be used to receive input information from the user and to generate signal inputs related to user settings and function control of the mobile terminal 900. Specifically, in the embodiment of the present invention, the input unit 930 may include a touch panel 931. The touch panel 931, also referred to as a touch screen, may collect touch operations performed by the user on or near it (for example, operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 931 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends the coordinates to the processor 960, and can also receive and execute commands sent by the processor 960. The touch panel 931 may be implemented in various types, such as resistive, capacitive, infrared, or surface-acoustic-wave. In addition to the touch panel 931, the input unit 930 may also include other input devices 932, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Among them, the display unit 940 may be used to display information input by the user or information provided to the user and various menu interfaces of the mobile terminal. The display unit 940 may include a display panel 941, and the display panel 941 may be optionally configured in the form of an LCD or an Organic Light-Emitting Diode (OLED).
It should be noted that the touch panel 931 may overlay the display panel 941 to form a touch display screen, and when the touch display screen detects a touch operation on or near the touch display screen, the touch display screen transmits the touch operation to the processor 960 to determine the type of the touch event, and then the processor 960 provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement of these two display areas is not limited; they may be arranged vertically, side by side, or in any other manner that distinguishes the two areas. The application program interface display area may be used to display the interface of an application. Each interface may contain at least one interface element, such as an icon and/or a widget desktop control of an application, and the application program interface display area may also be an empty interface containing no content. The common control display area is used to display frequently used controls, such as a settings button, interface numbers, scroll bars, and phone-book icons.
The processor 960 is the control center of the mobile terminal; it connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the first memory 921 and calling the data stored in the second memory 922, thereby monitoring the mobile terminal as a whole. Optionally, the processor 960 may include one or more processing units.
In this embodiment of the present invention, the mobile terminal 900 further includes: the computer program stored in the memory 920 and executable on the processor 960, in particular, by invoking software programs and/or modules stored in the first memory 921 and/or data stored in the second memory 922, the processor 960 performs the following steps: acquiring first data acquired by a first camera and second data acquired by a second camera;
controlling a display screen to alternately display first data and second data according to a preset display frame rate;
wherein the first data and the second data are image data or video data.
In particular, the computer program when executed by the processor 960 realizes the following steps: turning on the first light source component, turning off the second light source component at the same time, and controlling the display screen to display first data;
after a preset time interval, closing the first light source component, simultaneously opening the second light source component, and controlling the display screen to display second data;
after each preset time interval, the steps of turning on the first light source assembly while turning off the second light source assembly and controlling the display screen to display the first data, and then turning off the first light source assembly while turning on the second light source assembly and controlling the display screen to display the second data, are cyclically executed until it is detected that the screen sharing function is turned off, or that the first camera or the second camera is turned off.
Specifically, the first camera and the second camera are both rear cameras, and the mobile terminal further comprises at least one front camera; the computer program when executed by the processor 960 performs the steps of: detecting whether a shooting control instruction is received or not;
if the shooting control instruction is detected, controlling the first camera and/or the second camera to execute shooting operation according to the shooting control instruction;
wherein the shooting control instruction is associated with the rotation angle and the rotation direction of the first camera and/or the second camera.
Further, the computer program when executed by the processor 960 realizes the following steps: controlling at least one front camera to detect control actions;
and if the control action is detected, controlling the first camera and/or the second camera to execute shooting operation according to the control action.
The at least one front camera comprises a first front camera and a second front camera; the computer program when executed by the processor 960 performs the steps of: respectively controlling the first front camera and the second front camera to perform control action detection;
if the first front-facing camera detects a first control action in the visible range of the first full-screen picture, controlling the first camera to execute shooting operation according to the first control action;
and if the second front-facing camera detects a second control action in the visible range of the second full-screen picture, controlling the second camera to execute shooting operation according to the second control action.
Wherein the control action comprises: at least one of a gesture control action, a head swing control action, and a finger pointing control action.
In particular, the computer program when executed by the processor 960 realizes the following steps: acquiring voice data collected by a microphone;
comparing the voice data collected by the microphone with pre-stored reference voice data;
if the voice data collected by the microphone is matched with at least one item of prestored reference voice data, determining that a shooting control instruction is received;
wherein the reference voice data includes: first reference voice data associated with the first camera, and second reference voice data associated with the second camera.
In particular, the computer program when executed by the processor 960 realizes the following steps: if the voice data collected by the microphone is matched with the first reference voice data, controlling the first camera to execute shooting operation according to the voice data collected by the microphone;
and if the voice data collected by the microphone is matched with the second reference voice data, controlling the second camera to execute shooting operation according to the voice data collected by the microphone.
In particular, the computer program, when executed by the processor 960, performs the following steps: respectively displaying first prompt information and second prompt information on a first full-screen picture and a second full-screen picture;
acquiring first reference voice information of a first user and second reference voice information of a second user, which are acquired by a microphone;
respectively establishing a first association relation between a first camera and first reference voice information and a second association relation between a second camera and second reference voice information;
the first prompt message is used for prompting the first user to input voice information, and the second prompt message is used for prompting the second user to input voice information.
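The enrollment flow above amounts to prompting each user in turn and recording two camera-to-voice associations; a sketch with hypothetical names:

```python
def enroll_reference_voices(mic_capture):
    """Prompt each user (the first on the first full-screen picture, the
    second on the second), capture a reference sample, and associate it
    with the matching camera. `mic_capture` is a stand-in for showing a
    prompt and recording microphone audio."""
    associations = {}
    for camera, prompt in (("first_camera", "First user: please speak"),
                           ("second_camera", "Second user: please speak")):
        sample = mic_capture(prompt)     # display prompt, record audio
        associations[camera] = sample    # camera -> reference voice data
    return associations

# Fake capture that just returns the prompted user's label.
refs = enroll_reference_voices(lambda prompt: prompt.split(":")[0])
```

The returned mapping plays the role of the first and second association relations described above.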
In particular, the computer program, when executed by the processor 960, performs the following steps: detecting a preset action for ending shooting;
and if the preset action is detected, controlling the first camera and/or the second camera to output image data or video data.
In particular, the computer program, when executed by the processor 960, performs the following steps: respectively controlling the first front camera and the second front camera to carry out preset action detection;
if the first front-facing camera detects a first preset action in the visible range of the first full-screen picture, controlling the first camera to output image data or video data;
and if the second preset action is detected by the second front-facing camera in the visible range of the second full-screen picture, controlling the second camera to output image data or video data.
In particular, the computer program, when executed by the processor 960, performs the following steps: respectively detecting touch operations on a first shooting button in a first full-screen picture and a second shooting button in a second full-screen picture;
if a first touch operation on the first shooting button is detected, controlling the first camera to output image data or video data;
and if the second touch operation on the second shooting button is detected, controlling the second camera to output image data or video data.
In particular, the computer program, when executed by the processor 960, performs the following steps: performing image synthesis on the image data shot by the first camera and the second camera to generate an image;
or,
and carrying out video synthesis on the video data shot by the first camera and the second camera to generate a video.
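One possible synthesis is a side-by-side merge; the patent does not fix a layout, so the left/right split below is an assumption. The sketch treats images as nested lists of pixel rows:

```python
def synthesize_side_by_side(img1, img2):
    """Merge two equal-height images, given as lists of pixel rows, into a
    single side-by-side image. A left/right composition is only one
    choice; the patent leaves the synthesis layout open."""
    if len(img1) != len(img2):
        raise ValueError("images must have the same height")
    return [row1 + row2 for row1, row2 in zip(img1, img2)]

combined = synthesize_side_by_side([[1, 1], [1, 1]], [[2, 2], [2, 2]])
```

Video synthesis would apply the same per-frame merge (or a concatenation in time) to the two video streams.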
In particular, the computer program, when executed by the processor 960, performs the following steps: detecting the on states of a screen sharing function and of a first camera and/or a second camera;
if the screen sharing function is started and the first camera and the second camera are both started, first data collected by the first camera and second data collected by the second camera are acquired.
In particular, the computer program, when executed by the processor 960, performs the following steps: if it is detected that the screen sharing function is turned off, or that the first camera or the second camera is turned off, turning on all light source assemblies in the backlight source.
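The on/off logic of the two preceding paragraphs reduces to a small state decision; a sketch whose returned labels are illustrative, not patent terms:

```python
def backlight_mode(sharing_on, cam1_on, cam2_on):
    """Choose how to drive the light source assemblies: alternate the two
    assemblies only while screen sharing is on and both cameras are on;
    otherwise turn every assembly on for ordinary single-picture display."""
    if sharing_on and cam1_on and cam2_on:
        return "alternate_assemblies"   # dual full-screen pictures
    return "all_assemblies_on"          # normal display

modes = [backlight_mode(s, c1, c2)
         for (s, c1, c2) in [(True, True, True), (True, False, True),
                             (False, True, True)]]
```

Closing either camera or the sharing function thus immediately falls back to lighting all assemblies.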
The display screen of the mobile terminal 900 of the embodiment of the invention can form, at different viewing angles, two full-screen pictures that do not affect each other. When a plurality of users shoot simultaneously, the shooting content of the different users can be displayed alternately on the display screen, so that each user sees his or her own full-screen shot at his or her own viewing angle; multi-user shared shooting and display are thus realized while the two pictures are kept from affecting each other.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: various media capable of storing program codes, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Claims (26)

1. A shooting method applied to a mobile terminal, the mobile terminal comprising a display screen, a first camera and a second camera, characterized in that the display screen comprises a backlight source, a light guide plate and a display panel; the backlight source comprises at least two light source assemblies, and each light source assembly is respectively arranged on a different side edge of the light guide plate; light emitted by a first light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a first angle to form a first full-screen picture, and the visible range of the first full-screen picture is: a range of angles of light exiting the display panel at an exit angle not smaller than the first angle; light emitted by a second light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a second angle to form a second full-screen picture, and the visible range of the second full-screen picture is: a range of angles of light exiting the display panel at an exit angle not smaller than the second angle; the visible range of the second full-screen picture and the visible range of the first full-screen picture do not overlap within a preset angle range;
the shooting method comprises the following steps:
acquiring first data acquired by a first camera and second data acquired by a second camera;
controlling the display screen to alternately display the first data and the second data according to a preset display frame rate;
wherein the first data and the second data are image data or video data;
after the step of controlling the display screen to alternately display the first data and the second data according to the preset display frame rate, the method further includes:
detecting whether a shooting control instruction is received or not;
if a shooting control instruction is detected, controlling the first camera and/or the second camera to execute shooting operation according to the shooting control instruction;
wherein the shooting control instruction is associated with a rotation angle and a rotation direction of the first camera and/or the second camera;
the first camera and the second camera are rear cameras with adjustable shooting angles, and the mobile terminal further comprises at least one front camera;
the step of detecting whether a shooting control instruction is received includes:
controlling the at least one front camera to perform control action detection;
if the shooting control instruction is detected, controlling the first camera and/or the second camera to execute the shooting operation according to the shooting control instruction, wherein the step comprises the following steps:
if the control action is detected, controlling the first camera and/or the second camera to execute shooting operation according to the control action;
the at least one front camera comprises a first front camera and a second front camera;
the step of controlling the at least one front camera to detect the control action comprises the following steps:
respectively controlling the first front camera and the second front camera to perform control action detection;
if the control action is detected, controlling the first camera and/or the second camera to execute shooting operation according to the control action, wherein the step comprises the following steps:
if the first front-facing camera detects a first control action in the visible range of the first full-screen picture, controlling the first camera to carry out rotary shooting according to the first control action;
and if the second front-facing camera detects a second control action in the visible range of the second full-screen picture, controlling the second camera to carry out rotary shooting according to the second control action.
2. The shooting method according to claim 1, wherein the step of controlling the display screen to alternately display the first data and the second data according to a preset display frame rate comprises:
turning on the first light source assembly while turning off the second light source assembly, and controlling the display screen to display the first data;
after a preset time interval, turning off the first light source assembly while turning on the second light source assembly, and controlling the display screen to display the second data;
and after the preset time interval, cyclically executing the steps from turning on the first light source assembly while turning off the second light source assembly and controlling the display screen to display the first data, through to turning off the first light source assembly while turning on the second light source assembly after the preset time interval and controlling the display screen to display the second data, until it is detected that the screen sharing function is turned off, or that the first camera or the second camera is turned off.
3. The photographing method according to claim 1, wherein the control action includes: at least one of a gesture control action, a head swing control action, and a finger pointing control action.
4. The photographing method according to claim 1, wherein the step of detecting whether a photographing control instruction is received includes:
acquiring voice data collected by a microphone;
comparing the voice data collected by the microphone with pre-stored reference voice data;
if the voice data collected by the microphone is matched with at least one item of prestored reference voice data, determining that a shooting control instruction is received;
wherein the reference voice data includes: first reference voice data associated with the first camera, and second reference voice data associated with the second camera.
5. The shooting method according to claim 4, wherein the step of controlling the first camera and/or the second camera to execute the shooting operation according to the shooting control instruction if the shooting control instruction is detected comprises:
if the voice data collected by the microphone is matched with the first reference voice data, controlling the first camera to execute shooting operation according to the voice data collected by the microphone;
and if the voice data collected by the microphone is matched with the second reference voice data, controlling the second camera to execute shooting operation according to the voice data collected by the microphone.
6. The shooting method according to claim 4, wherein before the step of detecting whether the shooting control instruction is received, the method further comprises:
respectively displaying first prompt information and second prompt information on the first full screen picture and the second full screen picture;
acquiring first reference voice information of a first user and second reference voice information of a second user, which are acquired by a microphone;
respectively establishing a first association relation between the first camera and the first reference voice information and a second association relation between the second camera and the second reference voice information;
the first prompt message is used for prompting a first user to input voice information, and the second prompt message is used for prompting a second user to input voice information.
7. The shooting method according to claim 1, wherein after the step of detecting whether the shooting control instruction is received, the method further comprises:
detecting a preset action for ending shooting;
and if the preset action is detected, controlling the first camera and/or the second camera to output image data or video data.
8. The shooting method according to claim 7, wherein the first camera and the second camera are both rear cameras, and the mobile terminal comprises a first front camera and a second front camera;
the step of detecting a preset action for ending the photographing includes:
respectively controlling the first front camera and the second front camera to carry out preset action detection;
if the preset action is detected, the step of controlling the first camera and/or the second camera to output image data or video data comprises the following steps:
if the first front-facing camera detects a first preset action in the visible range of the first full-screen picture, controlling the first camera to output image data or video data;
and if the second front-facing camera detects a second preset action in the visible range of the second full-screen picture, controlling the second camera to output image data or video data.
9. The photographing method according to claim 7, wherein the step of detecting a preset action for ending photographing includes:
respectively detecting touch operations on a first shooting button in the first full screen picture and a second shooting button in the second full screen picture;
if the preset action is detected, the step of controlling the first camera and/or the second camera to output image data or video data comprises the following steps:
if a first touch operation on the first shooting button is detected, controlling the first camera to output image data or video data;
and if the second touch operation on the second shooting button is detected, controlling the second camera to output image data or video data.
10. The shooting method according to claim 1, wherein after the step of controlling the first camera and/or the second camera to perform shooting operations according to the shooting control instruction, the shooting method further comprises:
image synthesis is carried out on the image data shot by the first camera and the second camera to generate an image;
or,
and carrying out video synthesis on the video data shot by the first camera and the second camera to generate a video.
11. The shooting method according to claim 1, wherein the step of acquiring the first data collected by the first camera and the second data collected by the second camera is preceded by the step of:
detecting a screen sharing function and the starting state of the first camera and/or the second camera;
if the screen sharing function is detected to be started, and the first camera and the second camera are both started, acquiring first data acquired by the first camera and second data acquired by the second camera.
12. The photographing method according to claim 11, wherein after the step of detecting the on states of the screen sharing function and of the first camera and/or the second camera, the method further comprises:
and if the screen sharing function is detected to be closed, or the first camera or the second camera is detected to be closed, starting all light source assemblies in the backlight source.
13. A mobile terminal, the mobile terminal comprising a display screen, a first camera and a second camera, characterized in that the display screen comprises a backlight source, a light guide plate and a display panel; the backlight source comprises at least two light source assemblies, and each light source assembly is respectively arranged on a different side edge of the light guide plate; light emitted by a first light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a first angle to form a first full-screen picture, and the visible range of the first full-screen picture is: a range of angles of light exiting the display panel at an exit angle not smaller than the first angle; light emitted by a second light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a second angle to form a second full-screen picture, and the visible range of the second full-screen picture is: a range of angles of light exiting the display panel at an exit angle not smaller than the second angle; the visible range of the second full-screen picture and the visible range of the first full-screen picture do not overlap within a preset angle range;
wherein the mobile terminal further comprises:
the first acquisition module is used for acquiring first data acquired by the first camera and second data acquired by the second camera;
the display module is used for controlling the display screen to alternately display the first data and the second data according to a preset display frame rate;
wherein the first data and the second data are image data or video data;
the mobile terminal further includes:
the first detection module is used for detecting whether a shooting control instruction is received or not;
the shooting module is used for controlling the first camera and/or the second camera to execute shooting operation according to the shooting control instruction when the shooting control instruction is detected;
wherein the shooting control instruction is associated with a rotation angle and a rotation direction of the first camera and/or the second camera;
the first camera and the second camera are rear cameras with adjustable shooting angles, and the mobile terminal further comprises at least one front camera;
the first detection module includes:
the first detection submodule is used for controlling the at least one front-facing camera to detect control actions;
the photographing module includes:
the first shooting submodule is used for controlling the first camera and/or the second camera to execute shooting operation according to the control action when the control action is detected;
the at least one front camera comprises a first front camera and a second front camera;
the first detection submodule includes:
the first detection unit is used for respectively controlling the first front camera and the second front camera to carry out control action detection;
the first photographing sub-module includes:
the first shooting unit is used for controlling the first camera to carry out rotary shooting according to a first control action when the first front camera detects the first control action in the visible range of the first full-screen picture;
and the second shooting unit is used for controlling the second camera to carry out rotary shooting according to a second control action when the second front camera detects the second control action in the visible range of the second full-screen picture.
14. The mobile terminal of claim 13, wherein the display module comprises:
the first display sub-module is used for turning on the first light source assembly while turning off the second light source assembly, and controlling the display screen to display the first data;
the second display sub-module is used for, after a preset time interval, turning off the first light source assembly while turning on the second light source assembly, and controlling the display screen to display the second data;
and the first processing sub-module is used for, after the preset time interval, cyclically executing the steps from turning on the first light source assembly while turning off the second light source assembly and controlling the display screen to display the first data, through to turning off the first light source assembly while turning on the second light source assembly and controlling the display screen to display the second data, until it is detected that the screen sharing function is turned off, or that the first camera or the second camera is turned off.
15. The mobile terminal of claim 13, wherein the control action comprises: at least one of a gesture control action, a head swing control action, and a finger pointing control action.
16. The mobile terminal of claim 13, wherein the first detection module further comprises:
the first acquisition submodule is used for acquiring voice data acquired by a microphone;
the comparison submodule is used for comparing the voice data collected by the microphone with prestored reference voice data;
the second detection submodule is used for determining that a shooting control instruction is received when the voice data collected by the microphone is matched with at least one item of pre-stored reference voice data;
wherein the reference voice data includes: first reference voice data associated with the first camera, and second reference voice data associated with the second camera.
17. The mobile terminal of claim 16, wherein the photographing module comprises:
the second shooting submodule is used for controlling the first camera to execute shooting operation according to the voice data collected by the microphone when the voice data collected by the microphone is matched with the first reference voice data;
and the third shooting submodule is used for controlling the second camera to execute shooting operation according to the voice data collected by the microphone when the voice data collected by the microphone is matched with the second reference voice data.
18. The mobile terminal of claim 16, wherein the first detection module further comprises:
the prompting sub-module is used for displaying first prompting information and second prompting information on the first full-screen picture and the second full-screen picture respectively;
the second acquisition submodule is used for acquiring the first reference voice information of the first user and the second reference voice information of the second user, which are acquired by the microphone;
the second processing submodule is used for respectively establishing a first association relation between the first camera and the first reference voice information and a second association relation between the second camera and the second reference voice information;
the first prompt message is used for prompting a first user to input voice information, and the second prompt message is used for prompting a second user to input voice information.
19. The mobile terminal of claim 13, wherein the mobile terminal further comprises:
the second detection module is used for detecting a preset action for finishing shooting;
and the first processing module is used for controlling the first camera and/or the second camera to output image data or video data when the preset action is detected.
20. The mobile terminal of claim 19, wherein the first camera and the second camera are both rear-facing cameras, and wherein the mobile terminal comprises a first front-facing camera and a second front-facing camera;
the second detection module includes:
the third detection submodule is used for respectively controlling the first front camera and the second front camera to carry out preset action detection;
the first processing module comprises:
the third processing submodule is used for controlling the first camera to output image data or video data when the first front-facing camera detects a first preset action in the visible range of the first full-screen picture;
and the fourth processing submodule is used for controlling the second camera to output image data or video data when the second front-facing camera detects a second preset action in the visible range of the second full-screen picture.
21. The mobile terminal of claim 19, wherein the second detection module comprises:
the fourth detection submodule is used for respectively detecting touch operations on the first shooting button in the first full-screen picture and the second shooting button in the second full-screen picture;
the first processing module comprises:
the fifth processing submodule is used for controlling the first camera to output image data or video data when the first touch operation on the first shooting button is detected;
and the sixth processing submodule is used for controlling the second camera to output image data or video data when the second touch operation on the second shooting button is detected.
22. The mobile terminal of claim 13, wherein the mobile terminal further comprises:
the first synthesis module is used for carrying out image synthesis on the image data shot by the first camera and the second camera to generate an image;
or,
and the second synthesis module is used for carrying out video synthesis on the video data shot by the first camera and the second camera to generate a video.
23. The mobile terminal of claim 13, wherein the mobile terminal further comprises:
the third detection module is used for detecting a screen sharing function and the starting state of the first camera and/or the second camera;
and the second processing module is used for acquiring the first data acquired by the first camera and the second data acquired by the second camera when the screen sharing function is started and the first camera and the second camera are both started.
24. The mobile terminal of claim 23, wherein the mobile terminal further comprises:
and the third processing module is used for turning on all light source components in the backlight source when the screen sharing function is detected to be turned off or the first camera or the second camera is detected to be turned off.
25. A mobile terminal, characterized in that the mobile terminal comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the shooting method according to any one of claims 1 to 12 when executing the computer program.
26. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the photographing method according to any one of claims 1 to 12.
CN201710735010.3A 2017-08-24 2017-08-24 A kind of image pickup method, mobile terminal and computer readable storage medium Active CN107347140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710735010.3A CN107347140B (en) 2017-08-24 2017-08-24 A kind of image pickup method, mobile terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN107347140A CN107347140A (en) 2017-11-14
CN107347140B true CN107347140B (en) 2019-10-15

Family

ID=60258298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710735010.3A Active CN107347140B (en) 2017-08-24 2017-08-24 A kind of image pickup method, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107347140B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107798260B (en) * 2017-11-24 2019-11-15 维沃移动通信有限公司 A kind of display control method and mobile terminal
CN109065006A (en) * 2018-08-24 2018-12-21 北京康哥教育科技有限公司 A kind of LED formula intelligence small drum system and implementation method
CN110334568B (en) * 2019-03-30 2022-09-16 深圳市晓舟科技有限公司 Track generation and monitoring method, device, equipment and storage medium
CN111711748B (en) * 2020-05-27 2023-01-24 维沃移动通信(杭州)有限公司 Control method and device for screen refresh rate, electronic equipment and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100376924C (en) * 2002-09-19 2008-03-26 三菱电机株式会社 Display unit and electronic apparatus equipped with display unit
CN101266362A (en) * 2008-04-23 2008-09-17 友达光电股份有限公司 Multi- view angle LCD and driving method thereof
CN101750772A (en) * 2008-12-11 2010-06-23 嘉威光电股份有限公司 Multi-screen display method
CN102614662A (en) * 2011-01-30 2012-08-01 德信互动科技(北京)有限公司 Game implementation system
EP2475180A3 (en) * 2010-11-12 2014-06-11 Samsung Electronics Co., Ltd. Image providing apparatus and image providing method based on user's location
CN105373224A (en) * 2015-10-22 2016-03-02 山东大学 Hybrid implementation game system based on pervasive computing, and method thereof
CN105959554A (en) * 2016-06-01 2016-09-21 努比亚技术有限公司 Video shooting apparatus and method
CN106231175A (en) * 2016-07-09 2016-12-14 东莞市华睿电子科技有限公司 A kind of method that terminal gesture is taken pictures

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201041392A (en) * 2009-05-05 2010-11-16 Unique Instr Co Ltd Multi-view 3D video conference device

Also Published As

Publication number Publication date
CN107347140A (en) 2017-11-14

Similar Documents

Publication Publication Date Title
CN107333047B (en) Shooting method, mobile terminal and computer readable storage medium
US11128802B2 (en) Photographing method and mobile terminal
US10416789B2 (en) Automatic selection of a wireless connectivity protocol for an input device
CN107528938B (en) Video call method, terminal and computer readable storage medium
US9360965B2 (en) Combined touch input and offset non-touch gesture
CN107347140B (en) Image pickup method, mobile terminal and computer-readable storage medium
CN109791437B (en) Display device and control method thereof
US8532346B2 (en) Device, method and computer program product
GB2481714A (en) Performing an event with respect to the combination of user interface components
CN101727245A (en) Multi-touch positioning method and multi-touch screen
EP3077867A1 (en) Optical head mounted display, television portal module and methods for controlling graphical user interface
US10474324B2 (en) Uninterruptable overlay on a display
JP2013254389A (en) Display control device and method for controlling the same
CN112911147B (en) Display control method, display control device and electronic equipment
CN106406535B (en) Mobile device operation method and apparatus, and mobile device
CN105353829A (en) Electronic device
CN110858860A (en) Electronic device control responsive to finger rotation on a fingerprint sensor and corresponding method
US11863855B2 (en) Terminal device and image capturing method
CN107396151A (en) Video playback control method and electronic device
CN103543825B (en) Camera cursor system
CN107317994B (en) Video call method and electronic equipment
WO2023005908A1 (en) Photographing method and apparatus, device, and storage medium
US20200257396A1 (en) Electronic device and control method therefor
CN104965699A (en) Information processing method and electronic device
US20230254555A1 (en) Electronic apparatus, control method therefor, and computer-readable storage medium storing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant