CN107333047B - Shooting method, mobile terminal and computer readable storage medium

Info

Publication number: CN107333047B
Application number: CN201710734744.XA
Authority: CN (China)
Prior art keywords: front camera, display, shooting, face, data
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN107333047A
Inventor: 张胜利
Original and current assignee: Vivo Mobile Communication Co Ltd

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50: Constructional details
    • H04N 23/55: Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H04N 23/56: Cameras or camera modules provided with illuminating means
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control based on recognised objects
    • H04N 23/611: Control based on recognised objects, where the recognised objects include parts of the human body
    • H04N 23/62: Control of parameters via user interfaces

Abstract

The invention discloses a shooting method, a mobile terminal and a computer readable storage medium. The method comprises the following steps: acquiring first face data collected by a first front camera and second face data collected by a second front camera; and controlling a display screen to alternately display the first face data and the second face data according to a preset display frame rate; wherein the first face data and the second face data are image data or video data including faces. The display screen of the mobile terminal can form, at different viewing angles, two full-screen pictures that do not affect each other. When a plurality of users take selfies at the same time, the self-shot content of different users can be displayed alternately on the display screen, so that each user sees a full-screen picture of his or her own selfie within his or her own viewing angle, realizing multi-user selfie sharing and display sharing while ensuring that the two pictures do not interfere with each other.

Description

Shooting method, mobile terminal and computer readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a shooting method, a mobile terminal, and a computer-readable storage medium.
Background
At present, the display screen is the most important human-computer interaction medium of intelligent mobile terminals, and most mobile terminals convey information such as videos, games, text and pictures through the display screen. With the wide adoption of mobile photography devices, more and more mobile terminals are equipped with a shooting function, and some are even equipped with two or more cameras to meet users' shooting needs. In particular, as the demand for selfies grows, more and more mobile terminals are configured with dual front cameras. However, even with multiple front cameras, the display screen can only display a single full-screen picture. When two different users want to view their respective selfie content at the same time, the current display screen can only split the screen into two halves or two areas, which reduces the size of the picture each user sees and causes interference from the content displayed alongside.
Disclosure of Invention
The embodiment of the invention provides a shooting method, a mobile terminal and a computer readable storage medium, which aim to solve the problems in the prior art that, when different users view selfie content at the same time, the picture size is small and the displayed content suffers interference.
In a first aspect, an embodiment of the present invention provides a shooting method applied to a mobile terminal. The mobile terminal includes a display screen, a first front camera and a second front camera, and the display screen includes a display panel, a backlight and a light guide plate, the backlight comprising at least two light source assemblies, each disposed on a different side of the light guide plate. Light emitted by a first light source assembly of the at least two light source assemblies is projected through the light guide plate onto the display panel at an incident angle not smaller than a first angle to form a first full-screen picture, whose visible range is the range of angles over which that light exits the display panel at an exit angle not smaller than the first angle. Light emitted by a second light source assembly of the at least two light source assemblies is projected through the light guide plate onto the display panel at an incident angle not smaller than a second angle to form a second full-screen picture, whose visible range is the range of angles over which that light exits the display panel at an exit angle not smaller than the second angle. The visible range of the second full-screen picture does not overlap the visible range of the first full-screen picture within a preset angle range.
the shooting method comprises the following steps:
acquiring first face data collected by the first front camera and second face data collected by the second front camera;
controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate;
wherein the first face data and the second face data are image data or video data including faces.
In a second aspect, an embodiment of the present invention further provides a mobile terminal. The mobile terminal includes a display screen, a first front camera and a second front camera, and the display screen includes a display panel, a backlight and a light guide plate, the backlight comprising at least two light source assemblies, each disposed on a different side of the light guide plate. Light emitted by a first light source assembly of the at least two light source assemblies is projected through the light guide plate onto the display panel at an incident angle not smaller than a first angle to form a first full-screen picture, whose visible range is the range of angles over which that light exits the display panel at an exit angle not smaller than the first angle. Light emitted by a second light source assembly of the at least two light source assemblies is projected through the light guide plate onto the display panel at an incident angle not smaller than a second angle to form a second full-screen picture, whose visible range is the range of angles over which that light exits the display panel at an exit angle not smaller than the second angle. The visible range of the second full-screen picture does not overlap the visible range of the first full-screen picture within a preset angle range.
The mobile terminal further includes:
the first acquisition module is used for acquiring first face data collected by the first front camera and second face data collected by the second front camera;
the display module is used for controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate;
the first face data and the second face data are image data or video data including faces.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes a processor, a memory, and a computer program stored in the memory and operable on the processor, and the processor implements the steps of the shooting method when executing the computer program.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the shooting method described above.
With the shooting method, the mobile terminal and the computer readable storage medium of the embodiments of the invention, the display screen of the mobile terminal can form, at different viewing angles, two full-screen pictures that do not affect each other. When a plurality of users take selfies at the same time, the self-shot content of different users can be displayed alternately on the display screen, so that each user sees a full-screen picture of his or her own selfie within his or her own viewing angle, realizing multi-user selfie sharing and display sharing while ensuring that the two pictures do not interfere with each other.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive labor.
FIG. 1 is a first schematic structural view of a display screen according to an embodiment of the present invention;
FIG. 2 is a second schematic structural view of a display screen according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating a shooting method according to an embodiment of the present invention;
FIG. 4 is a timing diagram illustrating a shooting method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 6 is a first schematic block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 7 is a second schematic block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 8 is a block diagram of a mobile terminal according to an embodiment of the present invention.
Reference numerals: 1. display panel; 2. backlight; 3. light guide plate; 4. opaque lampshade; 5. first front camera; 6. second front camera; 21. light source assembly; 31. light guide surface.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The shooting method of the embodiment of the invention is applied to a mobile terminal. The mobile terminal comprises a first front camera, a second front camera and a display screen. Specifically, as shown in fig. 1, the display screen comprises a display panel 1, a backlight 2 and a light guide plate 3, wherein the backlight 2 comprises at least two light source assemblies 21, each disposed on a different side of the light guide plate 3.
Specifically, light emitted by a first light source assembly of the at least two light source assemblies 21 is projected through the light guide plate 3 onto the display panel 1 at an incident angle not smaller than a first angle to form a first full-screen picture; light emitted by a second light source assembly of the at least two light source assemblies 21 is projected through the light guide plate 3 onto the display panel 1 at an incident angle not smaller than a second angle to form a second full-screen picture. The visible range of the first full-screen picture is the range of angles over which light from the first light source assembly exits the display panel 1 at an exit angle not smaller than the first angle, and the visible range of the second full-screen picture is the range of angles over which light from the second light source assembly exits the display panel 1 at an exit angle not smaller than the second angle. Therefore, when users share the screen, each user can see his or her own full-screen picture within his or her own viewing-angle range. Furthermore, the visible range of the second full-screen picture does not overlap the visible range of the first full-screen picture within a preset angle range, so that the pictures of different users do not interfere with each other, improving the user experience.
Further, at least two groups of light guide surfaces 31 with different inclination angles are disposed on the side of the light guide plate 3 facing away from the display panel 1. The light guide surfaces 31 within a group share the same inclination angle, and each light source assembly 21 corresponds to one group of light guide surfaces 31. The function of the light guide surfaces 31 is to guide the projection direction of the light emitted by the corresponding light source assembly 21; specifically, each light guide surface 31 refracts the light projected onto it outward at a fixed refraction angle.
Specifically, the first light source assembly and the second light source assembly are disposed opposite each other, and the two groups of light guide surfaces 31 of the light guide plate 3 form at least one sawtooth-shaped groove surface. The number of light guide surfaces 31 in each group equals the number of sawtooth-shaped groove surfaces formed; for example, if a group contains one light guide surface 31, the two groups form one sawtooth-shaped groove surface. The light guide surfaces 31 with the first inclination angle in the sawtooth-shaped groove surfaces guide the projection direction of light from the first light source assembly, and the light guide surfaces 31 with the second inclination angle guide the projection direction of light from the second light source assembly.
Further, as shown in fig. 2, with the first and second light source assemblies disposed opposite each other, the backlight 2 may further include a third light source assembly and a fourth light source assembly disposed opposite each other, so that the first, second, third and fourth light source assemblies enclose a quadrangle; correspondingly, the light guide surfaces 31 of the light guide plate 3 form at least one square-pyramid-shaped groove surface. The number of light guide surfaces 31 in each group equals the number of square-pyramid-shaped groove surfaces formed; for example, if a group contains four light guide surfaces 31, four square-pyramid-shaped groove surfaces are formed. Within a square-pyramid-shaped groove surface, the light guide surface 31 with the third inclination angle guides the projection direction of light from the first light source assembly, the one with the fourth inclination angle guides light from the second light source assembly, the one with the fifth inclination angle guides light from the third light source assembly, and the one with the sixth inclination angle guides light from the fourth light source assembly.
Further, in order to prevent light leakage, each light source assembly 21 is covered by an opaque lampshade 4. The opaque lampshade 4 confines the light emitted by each light source assembly 21 within a preset angle and ensures that the light source assembly 21 does not leak light.
Each light source assembly 21 includes at least one row of LED lamps, and each row includes at least one LED lamp, arranged so that the light of different LED lamps is not blocked.
In the embodiment of the invention, the display screen of the mobile terminal comprises at least two groups of light source assemblies: light emitted by the first light source assembly forms a first full-screen picture through the display panel, and light emitted by the second light source assembly forms a second full-screen picture through the display panel, so that each user can view his or her own full-screen picture when the screen is shared. In addition, since the visible ranges of the two full-screen pictures do not overlap within a preset angle range, the full-screen pictures of different users sharing the screen do not interfere with each other, further improving the user experience.
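For illustration only, the following Python sketch models the visibility rule just described: which full-screen picture a viewer sees at a given angle. The angle convention and the threshold values are illustrative assumptions, not values from this patent.

```python
# Toy model of the dual-view rule: light for the first full-screen picture
# exits toward one side at angles >= FIRST_ANGLE, light for the second
# picture toward the other side at angles >= SECOND_ANGLE, and the two
# visible ranges do not overlap. Angles are measured from the panel normal,
# negative to the left and positive to the right (an assumed convention).

FIRST_ANGLE = 30.0   # hypothetical exit-angle threshold for the first picture
SECOND_ANGLE = 30.0  # hypothetical exit-angle threshold for the second picture

def visible_picture(viewer_angle_deg: float) -> str:
    """Return which full-screen picture a viewer at this angle can see."""
    if viewer_angle_deg <= -FIRST_ANGLE:
        return "first full-screen picture (left user A)"
    if viewer_angle_deg >= SECOND_ANGLE:
        return "second full-screen picture (right user B)"
    return "neither picture (between the two visible ranges)"

for angle in (-45.0, 0.0, 45.0):
    print(f"viewer at {angle:+5.1f} deg sees the {visible_picture(angle)}")
```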
Further, as shown in fig. 3, the shooting method provided by the embodiment of the present invention specifically includes the following steps:
step 301: and acquiring first face data acquired by the first front camera and second face data acquired by the second front camera.
The first face data and the second face data are image data or video data including faces. Specifically, the mobile terminal includes a backlight and a display panel, the backlight includes at least two light source assemblies, and the different light source assemblies correspond respectively to the first face data collected by the first front camera and the second face data collected by the second front camera. When two users, one on the left and one on the right, share a selfie at the same time, the left user A, located within the visible range of the first full-screen picture, has first face image or video data collected by the first front camera, and the right user B, located within the visible range of the second full-screen picture, has second face image or video data collected by the second front camera.
Step 302: controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate.
When the users start selfie sharing, the first face data and the second face data corresponding to each light source assembly are displayed alternately according to the preset display frame rate, achieving the effect of different users sharing a selfie. For example, the first light source assembly corresponds to the first front camera, so the display screen is controlled to display the first face data for the left user A located within the visible range of the first full-screen picture; the second light source assembly corresponds to the second front camera, so the display screen is controlled to display the second face data for the right user B located within the visible range of the second full-screen picture.
Based on this display screen structure, the embodiment of the invention allows two full-screen pictures to be viewed in different angle ranges: the first face data collected by the first front camera can be viewed within the visible range of the first full-screen picture, and the second face data collected by the second front camera can be viewed within the visible range of the second full-screen picture, so that two users can watch two full-screen selfie pictures without affecting each other.
Further, step 302 specifically includes: turning on the first light source assembly while turning off the second light source assembly, and controlling the display screen to display the first face data; after a preset time interval, turning off the first light source assembly while turning on the second light source assembly, and controlling the display screen to display the second face data; and after another preset time interval, cyclically repeating these two steps, until it is detected that the screen sharing function is turned off, or that the first front camera or the second front camera is turned off.
Specifically, after the display screen is lit, the shot data picture is refreshed and displayed at the preset frame rate, with the LED lamps on both sides turned on simultaneously. The mobile terminal then detects the on state of the screen sharing function. If the detected state indicates that the screen sharing function is on and both the first front camera and the second front camera are on, the first face data collected by the first front camera and the second face data collected by the second front camera are acquired, that is, step 301 is executed. If the detected state indicates that the screen sharing function is off, or that the first front camera or the second front camera is off, all light source assemblies in the backlight are turned on, that is, the LED lamps on both sides are lit simultaneously.
The specific implementation of the screen sharing function when two users (left user A and right user B) shoot simultaneously is further described below with reference to the timing diagram of alternately lighting the LEDs. As shown in fig. 4: (1) the right LED lamp is turned on while the left LED lamp is turned off, and the first frame of the display panel shows the first frame of the first face data collected by the first front camera; at this moment only left user A can see this picture. (2) After the preset time interval, the right LED lamp is turned off while the left LED lamp is turned on, and the second frame of the display panel shows the first frame of the second face data collected by the second front camera; only right user B can see this picture. (3) After the preset time interval, the right LED lamp is turned on while the left LED lamp is turned off, and the third frame of the display panel shows the second frame of the first face data collected by the first front camera; only left user A can see this picture. (4) After the preset time interval, the right LED lamp is turned off while the left LED lamp is turned on, and the fourth frame of the display panel shows the second frame of the second face data collected by the second front camera; only right user B can see this picture.
In this way, the selfie pictures collected by the first front camera and the second front camera are displayed alternately, and when the preset time interval is small enough, that is, when the preset display frame rate is high enough, both the left and the right user see continuous, flicker-free display content. To ensure consistency of image display, the preset display frame rate in the embodiment of the present invention is generally higher than the display frame rate of an ordinary display screen: if the ordinary display frame rate is 60 frames/second, the preset display frame rate needs to be 120 frames/second to guarantee the display effect of each data frame. That is, after the screen sharing function is turned on, the mobile terminal alternately displays the selfie frames of the left and right users, so that both users can simultaneously view, and view only, their own required pictures.
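For illustration only, the following Python sketch simulates the alternating frame/LED schedule just described at 120 frames per second; the stub camera class and all names are illustrative assumptions, not part of this patent.

```python
import itertools

# Simulate the timing of fig. 4: even display frames light the right LED and
# show the first front camera's data (seen by left user A); odd frames light
# the left LED and show the second front camera's data (seen by right user B).
PRESET_DISPLAY_FRAME_RATE = 120   # frames/second, twice an ordinary 60 fps panel

class StubFrontCamera:
    """Hypothetical stand-in for a front camera's stream of preview frames."""
    def __init__(self, name: str):
        self.name = name
        self._counter = itertools.count(1)
    def next_frame(self) -> str:
        return f"{self.name} frame {next(self._counter)}"

def run_shared_selfie(first_cam: StubFrontCamera, second_cam: StubFrontCamera,
                      display_frames: int) -> None:
    for tick in range(display_frames):
        t_ms = tick * 1000.0 / PRESET_DISPLAY_FRAME_RATE
        if tick % 2 == 0:
            leds = "right LED on, left LED off"   # only left user A's zone lit
            frame = first_cam.next_frame()
        else:
            leds = "right LED off, left LED on"   # only right user B's zone lit
            frame = second_cam.next_frame()
        print(f"t={t_ms:5.1f} ms  {leds:28s}  display shows: {frame}")

run_shared_selfie(StubFrontCamera("first front camera"),
                  StubFrontCamera("second front camera"), display_frames=4)
```

Because the two streams interleave, each user's own stream is effectively refreshed at 60 frames/second.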
Further, step 302 may specifically include: according to the preset display frame rate, when controlling the display screen to display the first face data full screen, displaying the second face data in a first preset area of the display screen; and when controlling the display screen to display the second face data full screen, displaying the first face data in a second preset area of the display screen. The first preset area and the second preset area may be small window areas in the display panel. For example, the full-screen picture visible to left user A shows the preview interface of the first front camera with a small window reserved for the preview interface of the second front camera, while the full-screen picture visible to right user B shows the preview interface of the second front camera with a small window reserved for the preview interface of the first front camera.
Alternatively, step 302 may specifically include: dividing the display area of the display screen into a first display area and a second display area; according to the preset display frame rate, when controlling the first display area to display the first face data, displaying the second face data in the second display area; and when controlling the first display area to display the second face data, displaying the first face data in the second display area. That is, the first display area and the second display area may each occupy half of the display panel. For example, the display panel is divided into left and right interfaces of equal area, which respectively display the preview interfaces of the first front camera and the second front camera, so that the two users can imitate each other's expressions.
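For illustration only, the following Python sketch shows the two composition modes above (full screen plus preview window, and equal split); frames are modelled as 2-D lists of pixels, and the window size and placement are illustrative assumptions.

```python
# Mode 1: full-screen frame with the other user's preview in a corner window.
# Mode 2: the panel divided into two equal areas showing both previews.

def compose_with_preview_window(full_frame, inset_frame, scale=4):
    """Overlay a 1/scale-sized copy of inset_frame in the top-left corner."""
    out = [row[:] for row in full_frame]
    for y in range(0, len(inset_frame), scale):
        for x in range(0, len(inset_frame[0]), scale):
            out[y // scale][x // scale] = inset_frame[y][x]
    return out

def compose_split(first_frame, second_frame):
    """Show first_frame on the left half and second_frame on the right half."""
    half = len(first_frame[0]) // 2
    return [a[:half] + b[half:] for a, b in zip(first_frame, second_frame)]

# Tiny 4x8 "frames" of single-character pixels, purely for demonstration.
frame_a = [["A"] * 8 for _ in range(4)]
frame_b = [["B"] * 8 for _ in range(4)]
for row in compose_split(frame_a, frame_b):
    print("".join(row))
```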
Further, after step 302, the method further includes: detecting a first preset action for triggering an expression snapshot; if the first preset action is detected, obtaining the expression similarity between the first face data and the second face data; and if the expression similarity is higher than a preset threshold, controlling the first front camera and the second front camera respectively to execute the shooting operation. The first preset action includes at least one of a touch-screen or air gesture control action, a head swing control action, a facial expression action, and a finger pointing control action. In this way, when the expression similarity between the first face data and the second face data is higher than the preset threshold, the selfie pictures of the different users are captured simultaneously, synchronizing their expressions. For example, the left and right users can each trigger a continuous shooting mode through a gesture instruction; left user A and right user B then start continuous shooting while matching the other user's expression shown in the preview window, achieving synchronized expressions and capturing a series of fun photos.
The step of obtaining the expression similarity between the first face data and the second face data if the first preset action is detected includes: if the first preset action is detected, starting a timer of a preset duration; and when the timer reaches the preset duration, obtaining the expression similarity between the first face data and the second face data. The continuous shooting mode can be triggered by either left user A or right user B. For example, left user A blinks 3 times to trigger a 3-second timer; after the timer expires, the first front camera and the second front camera simultaneously start detecting the actions of left user A and right user B, and when the expression similarity between them is detected to be greater than the preset threshold, a snapshot is taken, with subsequent snapshots following in turn, so that different users sharing a selfie can capture a series of fun photos with synchronized expressions.
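For illustration only, the following Python sketch shows one way the timer-then-compare logic above could be organized. The similarity measure over a few named expression attributes is a toy stand-in (a real system would compare facial landmarks or expression features), and the names, the 3-second timer and the 0.8 threshold are illustrative assumptions.

```python
import time

PRESET_THRESHOLD = 0.8   # hypothetical expression-similarity threshold
TIMER_SECONDS = 3        # e.g. armed by left user A blinking 3 times

def expression_similarity(expr_a: dict, expr_b: dict) -> float:
    """Toy similarity in [0, 1] over shared expression attributes."""
    keys = expr_a.keys() & expr_b.keys()
    if not keys:
        return 0.0
    return 1.0 - sum(abs(expr_a[k] - expr_b[k]) for k in keys) / len(keys)

def snapshot_if_synchronized(expr_a: dict, expr_b: dict) -> bool:
    """After the timer expires, shoot both cameras if expressions match."""
    time.sleep(TIMER_SECONDS)                 # timer of the preset duration
    if expression_similarity(expr_a, expr_b) > PRESET_THRESHOLD:
        print("snapshot: first and second front cameras shoot simultaneously")
        return True
    return False

# Both users smiling with eyes open: similarity 0.9 > 0.8, so a snapshot fires.
snapshot_if_synchronized({"smile": 1.0, "eyes_open": 1.0},
                         {"smile": 0.9, "eyes_open": 0.9})
```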
Further, after the first face data collected by the first front camera and the second face data collected by the second front camera are displayed alternately, the method further includes: detecting whether a shooting control instruction is received; and if a shooting control instruction is detected, controlling the first front camera and/or the second front camera to execute the shooting operation according to the shooting control instruction. The shooting control instruction is associated with the rotation angle and rotation direction of the first front camera and/or the second front camera. Specifically, according to the shooting control instruction, the first front camera and the second front camera can each deflect by a certain angle toward the self-shooting direction of the corresponding user; the angles of both front cameras are controllable, so that the angle of the left front camera can deflect toward left user A's self-shooting direction while the angle of the right front camera deflects toward right user B's self-shooting direction.
Specifically, as shown in fig. 5, the first front camera 5 and the second front camera 6 are cameras with adjustable shooting angles, that is, both the first front camera 5 and the second front camera 6 can rotate. The step of detecting whether a shooting control instruction is received includes: controlling the first front camera and the second front camera respectively to detect control actions. If a shooting control instruction is detected, the step of controlling the first front camera and/or the second front camera to execute the shooting operation according to the shooting control instruction includes: if the first front camera detects a first control action within the visible range of the first full-screen picture, controlling the first front camera to execute the shooting operation according to the first control action; and if the second front camera detects a second control action within the visible range of the second full-screen picture, controlling the second front camera to execute the shooting operation according to the second control action. That is, control actions are captured by the first front camera and the second front camera respectively, and include at least one of a touch-screen or air gesture control action, a head swing control action, a facial expression action, and a finger pointing control action. Head swing actions include turning the head left for the camera to rotate left by a certain angle, lifting the head for the camera to rotate up by a certain angle, and so on; touch-screen or air gesture actions include drawing specified shapes such as "V" or "O"; finger pointing control actions include pointing right for the camera to rotate right by a certain angle, pointing up for the camera to rotate up by a certain angle, and so on; facial expression actions include blinking, smiling, and so on. If the first front camera detects left user A turning the head left within the visible range of the first full-screen picture, the first front camera is controlled to rotate left by a certain angle and execute the shooting operation; if the second front camera detects right user B turning the head left within the visible range of the second full-screen picture, the second front camera is controlled to rotate left by a certain angle and execute the shooting operation.
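For illustration only, the following Python sketch dispatches a detected control action to the camera that saw it; the action names, the 10-degree step and the RotatableCamera class are illustrative assumptions.

```python
ROTATION_STEP_DEG = 10.0   # hypothetical per-action rotation step

class RotatableCamera:
    """Front camera with an adjustable shooting angle (pan/tilt in degrees)."""
    def __init__(self, name: str):
        self.name, self.pan_deg, self.tilt_deg = name, 0.0, 0.0
    def rotate(self, d_pan: float, d_tilt: float) -> None:
        self.pan_deg += d_pan
        self.tilt_deg += d_tilt
    def shoot(self) -> None:
        print(f"{self.name} shoots at pan={self.pan_deg:+.0f}, tilt={self.tilt_deg:+.0f}")

# Mapping of example control actions from the text to rotation offsets.
ACTION_TO_ROTATION = {
    "head_turn_left":     (-ROTATION_STEP_DEG, 0.0),
    "finger_point_right": (+ROTATION_STEP_DEG, 0.0),
    "head_lift_up":       (0.0, +ROTATION_STEP_DEG),
}

def handle_control_action(camera: RotatableCamera, action: str) -> None:
    """Rotate the camera that detected the action, then execute the shot."""
    if action in ACTION_TO_ROTATION:
        camera.rotate(*ACTION_TO_ROTATION[action])
        camera.shoot()

first_front = RotatableCamera("first front camera")
handle_control_action(first_front, "head_turn_left")   # left user A turns head left
```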
In a specific application, the user unlocks the display screen and starts the camera; the LEDs on both sides are controlled to turn on simultaneously, and the display screen refreshes and displays the user's selfie picture. When multiple users take selfies at the same time, the terminal judges whether the users have tapped to enter the shared-selfie function; if not, it continues controlling the LEDs on both sides to stay on simultaneously and displays one picture. If it detects that the users have tapped to enter the shared-selfie function, the previews or shot pictures of the first and second (left and right) front cameras are displayed alternately according to the timing shown in fig. 4, so that the left and right users can simultaneously view full-screen selfie previews or shot pictures. When a front camera detects that its user inputs a camera-rotation instruction, for example a head swing up, down, left or right, or a finger/palm pointing up, down, left or right, the corresponding camera rotates by a certain angle in that direction, changing the shooting or preview viewing angle, and the deflected angle is recorded.
Further, after step 302, the method further includes: detecting a second preset action for ending shooting; and if the second preset action is detected, controlling the first front camera and/or the second front camera to output the face image data or the face video data. The second preset action includes at least one of a touch-screen or air gesture control action, a head swing control action, a facial expression action, and a finger pointing control action.
Specifically, the step of detecting a second preset action for ending shooting includes: controlling the first front camera and the second front camera respectively to detect the second preset action. If the second preset action is detected, the step of controlling the first front camera and/or the second front camera to output image data or video data includes: if the first front camera detects the second preset action within the visible range of the first full-screen picture, controlling the first front camera to output the face image data or face video data; and if the second front camera detects the second preset action within the visible range of the second full-screen picture, controlling the second front camera to output the face image data or face video data. That is, when a user performing a selfie makes a preset action such as blinking, a scissors-hand (V) gesture or a smile, that user's current photo or video is cached. For example, when left user A makes such a preset action during the selfie, left user A's photo or video is cached; when right user B makes such a preset action, right user B's photo or video is cached.
Alternatively, the preset action for ending shooting may be the user tapping a corresponding function key. Specifically, the step of detecting the preset action for ending shooting may also be implemented by detecting touch operations on a first shooting button in the first full-screen picture and on a second shooting button in the second full-screen picture respectively. If the preset action is detected, the step of controlling the first front camera and/or the second front camera to output the face image data or face video data includes: if a first touch operation on the first shooting button is detected, controlling the first front camera to output image data or video data; and if a second touch operation on the second shooting button is detected, controlling the second front camera to output the face image data or face video data. That is, when a user wants to take a selfie, the user can tap the corresponding shooting button, and after the touch operation on that button is detected, the user's current photo or video is cached. For example, when left user A taps the corresponding shooting button, left user A's photo or video is cached; when right user B taps the corresponding shooting button, right user B's photo or video is cached.
Further, after step 302, the method further includes: synthesizing the first face data and the second face data into a face image, or synthesizing the first face data and the second face data into a face video. Specifically, according to the recorded deflection angles of the cameras, the first face data collected by the first front camera and the second face data collected by the second front camera are combined in the background into a left-right wide-view-angle photo or video. When the left and right users use different front cameras to take selfies simultaneously, the full-screen previews and shot pictures corresponding to each user can be shown on the display screen at the same time; the front cameras detect the corresponding users and further adjust their shooting angles through head rotation, gestures and the like, and the user at each viewing angle sees only his or her own picture without interfering with the picture at the other viewing angle. In addition, after shooting ends, the data shot by the two users can be synthesized into a wide-view-angle photo or video.
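For illustration only, the background synthesis step could look like the following Python sketch, which only places the two shots side by side (using the Pillow library, assumed available); a real implementation would align and blend the images according to the recorded deflection angles, which this sketch ignores.

```python
from PIL import Image

def synthesize_wide_view(first_photo: Image.Image,
                         second_photo: Image.Image) -> Image.Image:
    """Combine the two users' shots into one left-right wide-view image."""
    height = max(first_photo.height, second_photo.height)
    wide = Image.new("RGB", (first_photo.width + second_photo.width, height))
    wide.paste(first_photo, (0, 0))                    # left user A's shot
    wide.paste(second_photo, (first_photo.width, 0))   # right user B's shot
    return wide

# Example with blank stand-in photos:
left_shot = Image.new("RGB", (640, 480), "gray")
right_shot = Image.new("RGB", (640, 480), "white")
synthesize_wide_view(left_shot, right_shot).save("wide_view.jpg")
```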
The above describes acquiring the shooting control instruction through the front cameras; controlling the shooting process through voice is further described below. Specifically, the step of detecting whether a shooting control instruction is received includes: acquiring voice data collected by a microphone; comparing the voice data collected by the microphone with pre-stored reference voice data; and if the voice data collected by the microphone matches at least one item of the pre-stored reference voice data, determining that a shooting control instruction has been received. The reference voice data includes first reference voice data associated with the first front camera and second reference voice data associated with the second front camera. That is, by collecting the voice information input by a user, the first front camera and the second front camera are controlled to rotate and to execute the shooting operation.
In a specific application, the user unlocks the display screen and starts the camera; the LEDs on both sides are controlled to turn on simultaneously, and the display screen refreshes and displays the user's shot picture. When multiple users take selfies at the same time, the terminal judges whether the users have tapped to enter the shared-selfie function; if not, it continues controlling the LEDs on both sides to stay on simultaneously. If it detects that the users have tapped to enter the shared-selfie function, the previews or shot pictures of the first and second (left and right) front cameras are displayed alternately according to the timing shown in fig. 4, so that the left and right users can simultaneously view full-screen selfie previews or shot pictures. Further, if a shooting control instruction is detected, the step of controlling the first front camera and/or the second front camera to execute the shooting operation according to the shooting control instruction includes: if the voice data collected by the microphone matches the first reference voice data, controlling the first front camera to execute the shooting operation according to the voice data collected by the microphone; and if the voice data collected by the microphone matches the second reference voice data, controlling the second front camera to execute the shooting operation according to the voice data collected by the microphone. When the microphone picks up left user A saying "up", "down", "left" or "right", the first front camera deflects by a certain angle in the corresponding direction and the deflection angle is recorded; when the microphone picks up right user B saying "up", "down", "left" or "right", the second front camera deflects by a certain angle in the corresponding direction and the deflection angle is likewise recorded.
Further, before the step of detecting whether a shooting control instruction is received, the method further includes: displaying first prompt information on the first full-screen picture and second prompt information on the second full-screen picture; acquiring first reference voice information of the first user and second reference voice information of the second user collected by the microphone; and establishing a first association between the first front camera and the first reference voice information and a second association between the second front camera and the second reference voice information. The first prompt information prompts the first user to input voice information, and the second prompt information prompts the second user to input voice information. That is, a prompt interface shown at the left viewing angle asks left user A to speak, and the integrated microphone records left user A's voice data and stores it as reference voice information; similarly, a prompt interface shown at the right viewing angle asks the right user to speak, and the integrated microphone records right user B's voice data and stores it as reference voice information.
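For illustration only, one way to organize the enrollment-then-match flow above is sketched below in Python. Reducing "voice data" to a single-number fingerprint is a deliberate toy stand-in for real speaker verification; the class, method names and tolerance are illustrative assumptions.

```python
def voice_fingerprint(samples: list) -> float:
    """Toy stand-in for a speaker embedding: the mean of the audio samples."""
    return sum(samples) / max(len(samples), 1)

class VoiceRouter:
    def __init__(self):
        self.reference = {}   # camera name -> enrolled reference fingerprint

    def enroll(self, camera_name: str, samples: list) -> None:
        """Store the user's prompted speech as that camera's reference voice."""
        self.reference[camera_name] = voice_fingerprint(samples)

    def route(self, samples: list, tolerance: float = 0.1):
        """Return the camera whose reference voice matches, or None."""
        fp = voice_fingerprint(samples)
        for camera_name, ref in self.reference.items():
            if abs(ref - fp) <= tolerance:
                return camera_name   # shooting control instruction received
        return None                  # no match: not a control instruction

router = VoiceRouter()
router.enroll("first front camera", [0.20, 0.22, 0.18])    # left user A speaks
router.enroll("second front camera", [0.60, 0.58, 0.62])   # right user B speaks
print(router.route([0.21, 0.19, 0.20]))   # -> "first front camera"
```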
Further, after the step of detecting whether a shooting control instruction is received, the method further includes: detecting a preset action for ending shooting; and if the preset action is detected, controlling the first front camera and/or the second front camera to output image data or video data.
Specifically, the step of detecting the preset action for ending shooting can also be implemented by collecting voice information: when a user performs a photo or video operation, the user speaks voice information such as "photograph" or "record"; after acquiring the corresponding voice information, the mobile terminal starts photographing or recording and caches the user's current photo or video. After the step of controlling the first front camera and/or the second front camera to execute the shooting operation according to the shooting control instruction, the method further includes: synthesizing the image data shot by the first front camera and the second front camera into a face image, or synthesizing the video data shot by the first front camera and the second front camera into a face video. Specifically, according to the recorded deflection angles of the cameras, the first face data collected by the first front camera and the second face data collected by the second front camera are combined in the background into a left-right wide-view-angle photo or video. When the left and right users use different front cameras to shoot simultaneously, their full-screen selfie previews and shot pictures can be displayed on the display screen at the same time; the voice data of the left and right users is recorded, camera deflection control signals in the users' voice input are recognized, and the corresponding front camera deflects by a certain angle to adjust the preview and shooting viewing angle, while the user at each viewing angle sees only his or her own picture without interfering with the picture at the other viewing angle. In addition, after shooting ends, the data shot by the two users can be synthesized into a wide-view-angle photo or video.
Further, besides detecting preset actions through the front cameras and through collected voice data, the embodiment of the present invention also provides a method of adjusting the shooting angle of a camera through face tracking. Specifically, after step 302, the method further includes: controlling the first front camera and the second front camera respectively to track the users' faces and determine the face positions; and adjusting the shooting angle of the first front camera and/or the second front camera according to the face positions. Specifically, the step of adjusting the shooting angle of the first front camera and/or the second front camera according to the face positions includes: if the first face position tracked by the first front camera exceeds the first shooting range of the first front camera, controlling the first front camera to rotate by an angle until the first face position is within the first shooting range; and if the second face position tracked by the second front camera exceeds the second shooting range of the second front camera, controlling the second front camera to rotate by an angle until the second face position is within the second shooting range. The first shooting range is the central field of view of the first front camera, and the second shooting range is the central field of view of the second front camera.
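For illustration only, the following Python sketch shows the keep-in-range rule above in one angular dimension: the camera axis steps toward a tracked face whenever the face leaves the central field of view. The half-angle and step size are illustrative assumptions.

```python
CENTER_FOV_HALF_ANGLE = 15.0   # hypothetical half-width of the central field of view
ROTATION_STEP = 5.0            # hypothetical rotation step per adjustment, degrees

def keep_face_in_range(camera_axis_deg: float, face_angle_deg: float) -> float:
    """Rotate the camera until the tracked face is back inside its shooting range."""
    while abs(face_angle_deg - camera_axis_deg) > CENTER_FOV_HALF_ANGLE:
        if face_angle_deg > camera_axis_deg:
            camera_axis_deg += ROTATION_STEP
        else:
            camera_axis_deg -= ROTATION_STEP
    return camera_axis_deg

# A face tracked at +28 deg while the camera points at 0 deg: the camera
# rotates in 5-degree steps to +15 deg, at which point |28 - 15| <= 15.
print(keep_face_in_range(0.0, 28.0))   # -> 15.0
```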
Likewise, after the step of detecting whether a shooting control instruction is received, the method further includes: detecting a preset action for ending shooting; and if the preset action is detected, controlling the first front camera and/or the second front camera to output image data or video data. After the step of controlling the first front camera and/or the second front camera to execute the shooting operation according to the shooting control instruction, the method further includes: synthesizing the image data shot by the first front camera and the second front camera into a face image, or synthesizing the video data shot by the first front camera and the second front camera into a face video; specifically, according to the recorded deflection angles of the cameras, the first face data and the second face data are combined in the background into a left-right wide-view-angle photo or video. In this way, during the selfie process different users see two full-screen selfie pictures that do not affect each other, the front camera corresponding to each user can be controlled separately, and the output selfie data is combined into a wide-view-angle photo or video.
The above embodiments describe the shooting methods in different scenarios in detail; the mobile terminal corresponding to these embodiments is further described below with reference to fig. 6 to 8.
As shown in fig. 6, the mobile terminal 600 of the embodiment of the present invention can implement the details of the method of the foregoing embodiments, namely acquiring first face data collected by a first front camera and second face data collected by a second front camera, and controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate, with the same effect. The mobile terminal 600 specifically includes a display screen, a first front camera and a second front camera, the display screen comprising a display panel, a backlight and a light guide plate, the backlight comprising at least two light source assemblies, each disposed on a different side of the light guide plate. Light emitted by a first light source assembly of the at least two light source assemblies is projected through the light guide plate onto the display panel at an incident angle not smaller than a first angle to form a first full-screen picture, whose visible range is the range of angles over which that light exits the display panel at an exit angle not smaller than the first angle; light emitted by a second light source assembly is projected through the light guide plate onto the display panel at an incident angle not smaller than a second angle to form a second full-screen picture, whose visible range is the range of angles over which that light exits the display panel at an exit angle not smaller than the second angle; and the visible range of the second full-screen picture does not overlap the visible range of the first full-screen picture within a preset angle range. In addition, the mobile terminal 600 further includes the following functional modules:
the first acquiring module 610 is configured to acquire first face data acquired by a first front-facing camera and second face data acquired by a second front-facing camera;
the display module 620 is configured to control the display screen to alternately display the first face data and the second face data according to a preset display frame rate;
the first face data and the second face data are image data or video data including faces.
As shown in fig. 7, the display module 620 includes:
the first display sub-module 621 is configured to turn on the first light source assembly while turning off the second light source assembly, and control the display screen to display the first face data;
the second display sub-module 622 is configured to turn off the first light source module and turn on the second light source module at the same time after a preset time interval, and control the display screen to display second face data;
and the first processing sub-module 623 is configured to, after another preset time interval, cyclically repeat turning on the first light source assembly while turning off the second light source assembly and controlling the display screen to display the first face data, then after the preset time interval turning off the first light source assembly while turning on the second light source assembly and controlling the display screen to display the second face data, until it is detected that the screen sharing function is turned off, or that the first front camera or the second front camera is turned off.
Wherein, the display module 620 further comprises:
the third display sub-module 624 is configured to, according to the preset display frame rate, display the second face data in the first preset area of the display screen when the display screen is controlled to display the first face data in a full screen;
and a fourth display sub-module 625, configured to display the first face data in a second preset area of the display screen when the display screen is controlled to display the second face data in a full screen.
Alternatively, the display module 620 is specifically configured to:
dividing a display area of a display screen into a first display area and a second display area;
according to a preset display frame rate, when a first display area is controlled to display the first face data, displaying second face data in a second display area;
and when the first display area is controlled to display the second face data, the first face data is displayed in the second display area.
Wherein, the mobile terminal 600 further includes:
a first detection module 630, configured to detect whether a shooting control instruction is received;
the first processing module 640 is configured to, if a shooting control instruction is detected, control the first front-facing camera and/or the second front-facing camera to perform shooting operation according to the shooting control instruction;
and the shooting control instruction is associated with the rotation angle and the rotation direction of the first front camera and/or the second front camera.
Wherein, the first detecting module 630 includes:
the first detection submodule 631 is used for respectively controlling the first front camera and the second front camera to perform control action detection;
the first processing module 640 includes:
the first shooting submodule 641 is configured to, if the first front-facing camera detects a first control action within the visible range of the first full-screen picture, control the first front-facing camera to execute a shooting operation according to the first control action;
the second shooting sub-module 642 is configured to, if the second front-facing camera detects a second control action within the visible range of the second full-screen picture, control the second front-facing camera to execute a shooting operation according to the second control action.
Wherein the control action comprises: at least one of a touch screen or spaced gesture control action, a head swing control action, a facial expression action, and a finger pointing control action.
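The per-camera dispatch performed by the submodules 631, 641 and 642 can be pictured, under the same caveat, as the following sketch; the FrontCamera class and its method names are illustrative assumptions, not APIs from the disclosure.

    class FrontCamera:
        """Stand-in for a rotatable front camera with action detection."""
        def __init__(self, name):
            self.name = name
        def detect_action(self):
            # A real detector would look for touch or air gestures, head
            # swings, facial expressions or finger pointing within this
            # camera's visible range; None means nothing was detected.
            return None
        def shoot(self, action):
            print(self.name, "shoots in response to:", action)

    def dispatch_control_actions(first_cam, second_cam):
        # Each camera watches only its own full-screen picture's visible
        # range, so a control action triggers shooting on that camera alone.
        first_action = first_cam.detect_action()
        if first_action is not None:
            first_cam.shoot(first_action)
        second_action = second_cam.detect_action()
        if second_action is not None:
            second_cam.shoot(second_action)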
Wherein, the first detecting module 630 further includes:
a first obtaining submodule 632, configured to obtain voice data collected by a microphone;
the comparison submodule 633 is used for comparing the voice data collected by the microphone with the pre-stored reference voice data;
the second processing sub-module 634, configured to determine that a shooting control instruction is received if the voice data collected by the microphone matches at least one item of pre-stored reference voice data;
wherein the reference voice data includes: first reference voice data associated with the first front-facing camera, and second reference voice data associated with the second front-facing camera.
Wherein the second processing sub-module 634 includes:
the first shooting unit 6341 is configured to control the first front-facing camera to perform shooting operation according to the voice data collected by the microphone if the voice data collected by the microphone matches the first reference voice data;
and the second shooting unit 6342 is configured to control the second front camera to perform a shooting operation according to the voice data collected by the microphone if the voice data collected by the microphone matches the second reference voice data.
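Purely as an illustration of the voice-routing logic of the shooting units 6341 and 6342, the following sketch compares captured voice data with the two stored references and routes the shot accordingly; the toy string matcher stands in for a real voiceprint or phrase comparison and is an assumption of this sketch.

    def toy_match(sample, reference):
        # Placeholder matcher; a real system would compare voiceprints or
        # recognized phrases rather than raw strings.
        return sample.strip().lower() == reference.strip().lower()

    def route_voice_command(voice, first_ref, second_ref,
                            shoot_first, shoot_second, match=toy_match):
        # Route the shot to whichever camera's reference voice data matches.
        if match(voice, first_ref):
            shoot_first()
        elif match(voice, second_ref):
            shoot_second()

    route_voice_command("Cheese!", "cheese!", "smile!",
                        lambda: print("first front camera shoots"),
                        lambda: print("second front camera shoots"))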
Wherein, the first detecting module 630 further includes:
the prompt submodule 635 is configured to display first prompt information and second prompt information on the first full screen picture and the second full screen picture, respectively;
the second obtaining submodule 636 is configured to obtain first reference voice information of the first user and second reference voice information of the second user, which are collected by the microphone;
the third processing submodule 637 is configured to establish a first association relationship between the first front-facing camera and the first reference voice information and a second association relationship between the second front-facing camera and the second reference voice information, respectively;
the first prompt message is used for prompting the first user to input voice information, and the second prompt message is used for prompting the second user to input voice information.
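The enrollment flow of the submodules 635 to 637 might be sketched as follows; the prompt and recording callbacks are hypothetical names introduced here for illustration only.

    def enroll_reference_voices(show_prompt, record_voice):
        # Prompt each user inside their own full-screen picture, record one
        # reference utterance apiece, and bind it to the matching camera.
        show_prompt("first full-screen picture", "Please record your shutter phrase")
        first_ref = record_voice()
        show_prompt("second full-screen picture", "Please record your shutter phrase")
        second_ref = record_voice()
        return {"first_front_camera": first_ref,
                "second_front_camera": second_ref}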
Wherein, the mobile terminal 600 further includes:
the face tracking module 650 is configured to control the first front-facing camera and the second front-facing camera to perform face tracking, and determine a face position;
the adjusting module 660 is configured to adjust a shooting angle of the first front camera and/or the second front camera according to the position of the face.
Wherein the adjusting module 660 comprises:
the first adjusting submodule 661, configured to control the first front-facing camera to perform angular rotation until the first face position is within the first shooting range if the first face position tracked by the first front-facing camera exceeds the first shooting range of the first front-facing camera;
the second adjusting submodule 662 is configured to, if the second face position tracked by the second front camera exceeds a second shooting range of the second front camera, control the second front camera to perform angular rotation until the second face position is within the second shooting range;
the first shooting range is the central view field range of the first front camera, and the second shooting range is the central view field range of the second front camera.
Wherein, the mobile terminal 600 further includes:
the second detection module 670 is configured to detect a first preset action for triggering expression snapshot;
the second obtaining module 680 is configured to obtain the expression similarity between the first face data and the second face data if the first preset action is detected;
the shooting module 690 is configured to respectively control the first front camera and the second front camera to execute a shooting operation if the expression similarity is higher than a preset threshold;
wherein, the first preset action comprises: at least one of a touch screen or spaced gesture control action, a head swing control action, a facial expression action, and a finger pointing control action.
Wherein the second obtaining module 680 includes:
the starting submodule 681, configured to start a timer with a preset duration if the first preset action is detected;
and the third obtaining submodule 682 is configured to obtain the expression similarity between the first face data and the second face data if the preset duration of the timer is reached.
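The timer-gated expression snapshot of the modules 670 to 682 can be sketched as below; the 2-second duration, the 0.8 threshold and the callback names are assumptions of this sketch, not values from the disclosure.

    import threading

    TIMER_SECONDS = 2.0          # hypothetical preset duration
    SIMILARITY_THRESHOLD = 0.8   # hypothetical preset threshold

    def arm_expression_snapshot(similarity, first_frame, second_frame,
                                shoot_first, shoot_second):
        # After the trigger action is detected, wait out the timer, then
        # compare the two users' expressions and shoot on both cameras if
        # they are similar enough.
        def on_timer():
            if similarity(first_frame(), second_frame()) > SIMILARITY_THRESHOLD:
                shoot_first()
                shoot_second()
        timer = threading.Timer(TIMER_SECONDS, on_timer)
        timer.start()
        return timer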
Wherein, the mobile terminal 600 further includes:
a third detecting module 6100 for detecting a second preset action for ending the photographing;
the second processing module 6110, configured to control the first front-facing camera and/or the second front-facing camera to output face image data or face video data if a second preset action is detected;
wherein the second preset action comprises at least one of a touch screen or an air gesture control action, a head swing control action, a facial expression action and a finger pointing control action.
Wherein, the third detecting module 6100 includes:
a second detection submodule 6101, configured to control the first front-facing camera and the second front-facing camera to perform second preset action detection, respectively;
the second processing module 6110 includes:
a fourth processing submodule 6111, configured to control the first front-facing camera to output face image data or face video data if the first front-facing camera detects a second preset action within the visible range of the first full-screen picture;
the fifth processing sub-module 6112 is configured to control the second front-facing camera to output the face image data or the face video data if the second front-facing camera detects a second preset action within the visible range of the second full-screen picture.
Wherein, the third detecting module 6100 further includes:
a third detection submodule 6102, configured to detect touch operations on the first shooting button in the first full screen picture and the second shooting button in the second full screen picture, respectively;
the second processing module 6110 includes:
the sixth processing sub-module 6113, configured to control the first front-facing camera to output image data or video data if the first touch operation on the first shooting button is detected;
the seventh processing sub-module 6114 is configured to control the second front-facing camera to output face image data or face video data if the second touch operation on the second shooting button is detected.
Wherein, the mobile terminal 600 further includes:
a first synthesis module 6120, configured to perform image synthesis on the first face data and the second face data to generate a face image;
alternatively,
the second synthesizing module 6130 is configured to perform video synthesis on the first face data and the second face data to generate a face video.
Wherein, the mobile terminal 600 further includes:
a fourth detecting module 6140, configured to detect the on state of the screen sharing function and of the first front camera and/or the second front camera;
the third processing module 6150 is configured to, if it is detected that the screen sharing function is turned on and both the first front-facing camera and the second front-facing camera are turned on, obtain first face data acquired by the first front-facing camera and second face data acquired by the second front-facing camera.
Wherein, the mobile terminal 600 further includes:
the fourth processing module 6160 is configured to turn on all light source modules in the backlight source if it is detected that the screen sharing function is turned off, or it is detected that the first front-facing camera or the second front-facing camera is turned off.
It is to be noted that the mobile terminal according to the embodiment of the present invention is a mobile terminal corresponding to the above shooting method, and both the implementation manner and the achieved technical effect of the above method are applicable to the embodiment of the mobile terminal. The display screen of the mobile terminal can form two full-screen pictures which are not mutually influenced at different visual angles, when a plurality of users self-shoot simultaneously, self-shooting contents of different users can be alternately displayed on the display screen, so that the users can see the full-screen pictures self-shot respectively in respective visual angles, and the mutual influence between the two pictures is avoided while multi-user self-shooting sharing and display sharing are realized.
In order to better achieve the above object, an embodiment of the present invention further provides a terminal, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the shooting method described above are implemented. An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the shooting method described above.
Fig. 8 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention. Specifically, the mobile terminal 800 in fig. 8 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 800 in fig. 8 includes a power supply 810, a memory 820, an input unit 830, a display unit 840, a photographing component 850, a processor 860, a WiFi (Wireless Fidelity) module 870, an audio circuit 880, and an RF circuit 890, wherein the photographing component 850 includes a first front camera 851 and a second front camera 852. In addition, the mobile terminal further includes a display screen, wherein the display screen includes a backlight source, a light guide plate and a display panel; the backlight source comprises at least two light source assemblies, each light source assembly being arranged on a different side edge of the light guide plate. The light emitted by a first light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a first angle to form a first full-screen picture, and the visible range of the first full-screen picture is the range of angles of light exiting the display panel at an exit angle not less than the first angle; the light emitted by a second light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a second angle to form a second full-screen picture, and the visible range of the second full-screen picture is the range of angles of light exiting the display panel at an exit angle not less than the second angle; within a preset angle range, the visible range of the second full-screen picture does not overlap the visible range of the first full-screen picture.
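To make the viewing-angle separation concrete, the following sketch checks that two exit-angle ranges do not overlap; the numeric ranges are assumptions for illustration and are not taken from the disclosure.

    def visible_ranges_disjoint(first_range, second_range):
        # Ranges are (low, high) exit angles in degrees relative to the
        # panel normal; disjoint ranges mean each user sees only their own
        # full-screen picture.
        (a_lo, a_hi), (b_lo, b_hi) = first_range, second_range
        return a_hi <= b_lo or b_hi <= a_lo

    # e.g. first picture visible from -60 to -15 degrees, second from 15 to 60:
    assert visible_ranges_disjoint((-60.0, -15.0), (15.0, 60.0))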
The input unit 830 may be used to receive user-input information and to generate signal inputs related to user settings and function control of the mobile terminal 800. Specifically, in the embodiment of the present invention, the input unit 830 may include a touch panel 831. The touch panel 831, also referred to as a touch screen, can collect touch operations performed by a user on or near it (for example, operations performed by the user on the touch panel 831 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 831 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 860, and can receive and execute commands sent by the processor 860. The touch panel 831 may be implemented in various types, such as resistive, capacitive, infrared, or surface acoustic wave. In addition to the touch panel 831, the input unit 830 may include other input devices 832, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 840 may be used to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal. The display unit 840 may include a display panel 841, which may optionally be configured in the form of an LCD, an Organic Light-Emitting Diode (OLED) display, or the like.
It should be noted that the touch panel 831 can overlay the display panel 841 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is passed to the processor 860 to determine the type of touch event, and the processor 860 then provides a corresponding visual output on the touch display screen according to the type of touch event.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement of the two display areas is not limited; they may be arranged vertically, side by side, or in any other manner that distinguishes them. The application interface display area may be used to display the interface of an application, and each interface may contain at least one interface element such as an application icon and/or a widget desktop control; it may also be an empty interface that contains no content. The common control display area is used to display frequently used controls, such as setting buttons, interface numbers, scroll bars, and application icons like the phone book icon.
The processor 860 is the control center of the mobile terminal: it connects the various parts of the entire handset using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the first memory 821 and calling data stored in the second memory 822, thereby monitoring the mobile terminal as a whole. Optionally, the processor 860 may include one or more processing units.
In this embodiment of the present invention, the mobile terminal 800 further includes a computer program stored in the memory 820 and operable on the processor 860. Specifically, by calling software programs and/or modules stored in the first memory 821 and/or data stored in the second memory 822, the processor 860 performs the following steps: acquiring first face data acquired by a first front camera and second face data acquired by a second front camera;
controlling a display screen to alternately display first face data and second face data according to a preset display frame rate;
the first face data and the second face data are image data or video data including faces.
In particular, the computer program when executed by the processor 860 performs the steps of: turning on the first light source assembly while turning off the second light source assembly, and controlling the display screen to display the first face data;
after a preset time interval, turning off the first light source assembly while turning on the second light source assembly, and controlling the display screen to display the second face data;
and cyclically executing, after each preset time interval, the steps of turning on the first light source assembly while turning off the second light source assembly and controlling the display screen to display the first face data, and then, after the preset time interval, turning off the first light source assembly while turning on the second light source assembly and controlling the display screen to display the second face data, until it is detected that the screen sharing function is turned off, or that the first front camera or the second front camera is turned off.
In particular, the computer program when executed by the processor 860 performs the steps of: according to a preset display frame rate, when the display screen is controlled to display the first face data in a full screen mode, displaying second face data in a first preset area of the display screen;
and when the display screen is controlled to display the second face data in a full screen mode, displaying the first face data in a second preset area of the display screen.
Wherein the computer program when executed by the processor 860 performs the steps of: dividing a display area of a display screen into a first display area and a second display area;
according to a preset display frame rate, when the first display area is controlled to display the first face data, displaying second face data in the second display area;
and when the first display area is controlled to display the second face data, the first face data is displayed in the second display area.
Further, the computer program when executed by the processor 860 performs the steps of: detecting whether a shooting control instruction is received or not;
if the shooting control instruction is detected, controlling the first front camera and/or the second front camera to execute shooting operation according to the shooting control instruction;
and the shooting control instruction is associated with the rotation angle and the rotation direction of the first front camera and/or the second front camera.
Wherein the computer program when executed by the processor 860 performs the steps of: respectively controlling the first front camera and the second front camera to perform control action detection;
if the first front-facing camera detects a first control action in the visible range of the first full-screen picture, controlling the first front-facing camera to execute shooting operation according to the first control action;
and if the second front-facing camera detects a second control action in the visible range of the second full-screen picture, controlling the second front-facing camera to execute shooting operation according to the second control action.
Wherein the control action comprises: at least one of a touch screen or spaced gesture control action, a head swing control action, a facial expression action, and a finger pointing control action.
In particular, the computer program when executed by the processor 860 performs the steps of: acquiring voice data collected by a microphone;
comparing the voice data collected by the microphone with pre-stored reference voice data;
if the voice data collected by the microphone is matched with at least one item of prestored reference voice data, determining that a shooting control instruction is received;
wherein the reference voice data includes: first reference voice data associated with the first front-facing camera, and second reference voice data associated with the second front-facing camera.
In particular, the computer program when executed by the processor 860 performs the steps of: if the voice data collected by the microphone is matched with the first reference voice data, controlling the first front-facing camera to execute shooting operation according to the voice data collected by the microphone;
and if the voice data collected by the microphone is matched with the second reference voice data, controlling the second front-facing camera to execute shooting operation according to the voice data collected by the microphone.
In particular, the computer program when executed by the processor 860 performs the steps of: respectively displaying first prompt information and second prompt information on a first full-screen picture and a second full-screen picture;
acquiring first reference voice information of a first user and second reference voice information of a second user, which are acquired by a microphone;
respectively establishing a first incidence relation between a first front camera and first reference voice information and a second incidence relation between a second front camera and second reference voice information;
the first prompt message is used for prompting the first user to input voice information, and the second prompt message is used for prompting the second user to input voice information.
In particular, the computer program when executed by the processor 860 performs the steps of: respectively controlling a first front camera and a second front camera to track the face of a user, and determining the face position;
and adjusting the shooting angle of the first front camera and/or the second front camera according to the position of the face.
In particular, the computer program when executed by the processor 860 performs the steps of: if the first face position tracked by the first front-facing camera exceeds the first shooting range of the first front-facing camera, controlling the first front-facing camera to rotate by an angle until the first face position is located in the first shooting range;
if the position of a second face tracked by the second front-facing camera exceeds a second shooting range of the second front-facing camera, controlling the second front-facing camera to rotate by an angle until the position of the second face is within the second shooting range;
the first shooting range is the central view field range of the first front camera, and the second shooting range is the central view field range of the second front camera.
In particular, the computer program when executed by the processor 860 performs the steps of: detecting a first preset action for triggering expression snapshot;
if the first preset action is detected, obtaining the expression similarity of the first face data and the second face data;
if the expression similarity is higher than a preset threshold value, respectively controlling the first front-facing camera and the second front-facing camera to execute shooting operation;
wherein, the first preset action comprises: at least one of a touch screen or spaced gesture control action, a head swing control action, a facial expression action, and a finger pointing control action.
In particular, the computer program when executed by the processor 860 performs the steps of: if the first preset action is detected, starting a timer with preset duration;
and when the preset duration of the timer is reached, obtaining the expression similarity of the first face data and the second face data.
In particular, the computer program when executed by the processor 860 performs the steps of: detecting a second preset action for finishing shooting;
if the second preset action is detected, controlling the first front-facing camera and/or the second front-facing camera to output face image data or face video data;
wherein the second preset action comprises at least one of a touch screen or an air gesture control action, a head swing control action, a facial expression action and a finger pointing control action.
In particular, the computer program when executed by the processor 860 performs the steps of: respectively controlling the first front camera and the second front camera to carry out second preset action detection;
if the first front-facing camera detects a second preset action in the visible range of the first full-screen picture, controlling the first front-facing camera to output face image data or face video data;
and if the second front-facing camera detects a second preset action in the visible range of the second full-screen picture, controlling the second front-facing camera to output the face image data or the face video data.
In particular, the computer program when executed by the processor 860 performs the steps of: respectively detecting touch operations on a first shooting button in a first full-screen picture and a second shooting button in a second full-screen picture;
if a first touch operation on the first shooting button is detected, controlling the first front-facing camera to output image data or video data;
and if the second touch operation on the second shooting button is detected, controlling the second front-facing camera to output the face image data or the face video data.
In particular, the computer program when executed by the processor 860 performs the steps of: carrying out image synthesis on the first face data and the second face data to generate a face image;
alternatively,
and carrying out video synthesis on the first face data and the second face data to generate a face video.
In particular, the computer program when executed by the processor 860 performs the steps of: detecting a screen sharing function and the opening state of a first front camera and/or a second front camera;
if the screen sharing function is started, and the first front camera and the second front camera are both started, first face data collected by the first front camera and second face data collected by the second front camera are acquired.
In particular, the computer program when executed by the processor 860 performs the steps of: and if the screen sharing function is detected to be closed, or the first front camera or the second front camera is detected to be closed, all light source components in the backlight source are started.
The display screen of the mobile terminal 800 of the embodiment of the invention can form two full-screen pictures that do not influence each other at different viewing angles. When a plurality of users take self-portraits at the same time, the self-photographing contents of different users can be alternately displayed on the display screen, so that each user sees a full-screen picture of his or her own self-photographing within his or her own viewing angle, realizing multi-user self-photographing sharing and display sharing while avoiding mutual interference between the two pictures.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Claims (40)

1. A shooting method applied to a mobile terminal, the mobile terminal comprising a display screen, a first front camera and a second front camera, wherein the display screen comprises a backlight source, a light guide plate and a display panel; the backlight source comprises at least two light source assemblies, each light source assembly being respectively arranged on a different side edge of the light guide plate; the light emitted by a first light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a first angle to form a first full-screen picture, and the visible range of the first full-screen picture is: a range of angles of light exiting the display panel at an exit angle not less than the first angle; the light emitted by a second light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a second angle to form a second full-screen picture, and the visible range of the second full-screen picture is: a range of angles of light exiting the display panel at an exit angle not less than the second angle; the visible range of the second full-screen picture and the visible range of the first full-screen picture do not overlap within a preset angle range;
the shooting method comprises the following steps:
acquiring first face data acquired by a first front camera and second face data acquired by a second front camera;
when self-photographing sharing is turned on, controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate;
the first face data and the second face data are image data or video data comprising faces;
the step of controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate includes:
according to a preset display frame rate, when the display screen is controlled to display first face data in a full screen mode, displaying second face data in a first preset area of the display screen;
and when the display screen is controlled to display second face data in a full screen mode, displaying the first face data in a second preset area of the display screen.
2. The shooting method according to claim 1, wherein the step of controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate includes:
turning on the first light source assembly while turning off the second light source assembly, and controlling the display screen to display the first face data;
after a preset time interval, turning off the first light source assembly while turning on the second light source assembly, and controlling the display screen to display the second face data;
and cyclically executing, after each preset time interval, the steps of turning on the first light source assembly while turning off the second light source assembly and controlling the display screen to display the first face data, and then, after the preset time interval, turning off the first light source assembly while turning on the second light source assembly and controlling the display screen to display the second face data, until it is detected that the screen sharing function is turned off, or until it is detected that the first front camera or the second front camera is turned off.
3. The shooting method according to claim 1, wherein the step of controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate includes:
dividing a display area of the display screen into a first display area and a second display area;
according to a preset display frame rate, when the first display area is controlled to display the first face data, the second face data is displayed in the second display area;
and when the first display area is controlled to display the second face data, displaying the first face data in the second display area.
4. The shooting method according to claim 1, wherein after the step of controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate, the method further comprises:
detecting whether a shooting control instruction is received or not;
if a shooting control instruction is detected, controlling the first front camera and/or the second front camera to execute shooting operation according to the shooting control instruction;
wherein the shooting control instruction is associated with the rotation angle and the rotation direction of the first front camera and/or the second front camera.
5. The photographing method according to claim 4, wherein the step of detecting whether a photographing control instruction is received includes:
respectively controlling the first front camera and the second front camera to perform control action detection;
if the shooting control instruction is detected, controlling the first front camera and/or the second front camera to execute the shooting operation according to the shooting control instruction, and the method comprises the following steps:
if the first front-facing camera detects a first control action in the visible range of the first full-screen picture, controlling the first front-facing camera to execute shooting operation according to the first control action;
and if the second front-facing camera detects a second control action in the visible range of the second full-screen picture, controlling the second front-facing camera to execute shooting operation according to the second control action.
6. The photographing method according to claim 5, wherein the control action includes: at least one of a touch screen or spaced gesture control action, a head swing control action, a facial expression action, and a finger pointing control action.
7. The photographing method according to claim 4, wherein the step of detecting whether a photographing control instruction is received includes:
acquiring voice data collected by a microphone;
comparing the voice data collected by the microphone with pre-stored reference voice data;
if the voice data collected by the microphone is matched with at least one item of prestored reference voice data, determining that a shooting control instruction is received;
wherein the reference voice data includes: first reference voice data associated with the first front-facing camera, and second reference voice data associated with the second front-facing camera.
8. The shooting method according to claim 7, wherein the step of controlling the first front camera and/or the second front camera to execute the shooting operation according to the shooting control instruction if the shooting control instruction is detected comprises:
if the voice data collected by the microphone is matched with the first reference voice data, controlling the first front-facing camera to execute shooting operation according to the voice data collected by the microphone;
and if the voice data collected by the microphone is matched with the second reference voice data, controlling the second front camera to execute shooting operation according to the voice data collected by the microphone.
9. The photographing method according to claim 7, wherein, before the step of detecting whether a photographing control instruction is received, the method further comprises:
respectively displaying first prompt information and second prompt information on the first full screen picture and the second full screen picture;
acquiring first reference voice information of a first user and second reference voice information of a second user, which are acquired by a microphone;
respectively establishing a first incidence relation between the first front camera and the first reference voice information and a second incidence relation between the second front camera and the second reference voice information;
the first prompt message is used for prompting a first user to input voice information, and the second prompt message is used for prompting a second user to input voice information.
10. The shooting method according to claim 1, wherein before the step of controlling the display screen to alternately display the first face data and the second face data at a preset display frame rate, the method further comprises:
respectively controlling the first front camera and the second front camera to track the face and determine the face position;
and adjusting the shooting angle of the first front camera and/or the second front camera according to the face position.
11. The shooting method according to claim 10, wherein the step of adjusting the shooting angle of the first front camera and/or the second front camera according to the face position comprises:
if the first face position tracked by the first front camera exceeds a first shooting range of the first front camera, controlling the first front camera to rotate by an angle until the first face position is within the first shooting range;
if the position of a second face tracked by the second front camera exceeds a second shooting range of the second front camera, controlling the second front camera to rotate by an angle until the position of the second face is within the second shooting range;
the first shooting range is the central view field range of the first front camera, and the second shooting range is the central view field range of the second front camera.
12. The shooting method according to claim 1, wherein after the step of controlling the display screen to alternately display the first face data and the second face data at a preset display frame rate, the method further comprises:
detecting a first preset action for triggering expression snapshot;
if the first preset action is detected, obtaining the expression similarity of the first face data and the second face data;
if the expression similarity is higher than a preset threshold value, respectively controlling the first front camera and the second front camera to execute shooting operation;
wherein the first preset action comprises: at least one of a touch screen or spaced gesture control action, a head swing control action, a facial expression action, and a finger pointing control action.
13. The shooting method according to claim 12, wherein the step of obtaining the expression similarity between the first face data and the second face data if the first preset action is detected includes:
if the first preset action is detected, starting a timer with preset duration;
and if the preset duration of the timer is reached, obtaining the expression similarity of the first face data and the second face data.
14. The shooting method according to claim 1, wherein after the step of controlling the display screen to alternately display the first face data and the second face data at a preset display frame rate, the method further comprises:
detecting a second preset action for finishing shooting;
if the second preset action is detected, controlling the first front-facing camera and/or the second front-facing camera to output face image data or face video data;
wherein the second preset action comprises at least one of a touch screen or an air gesture control action, a head swing control action, a facial expression action and a finger pointing control action.
15. The photographing method according to claim 14, wherein the step of detecting a second preset action for ending the photographing includes:
respectively controlling the first front camera and the second front camera to carry out second preset action detection;
if the second preset action is detected, controlling the first front-facing camera and/or the second front-facing camera to output image data or video data, wherein the step comprises the following steps:
if the first front-facing camera detects the second preset action in the visible range of the first full-screen picture, controlling the first front-facing camera to output face image data or face video data;
and if the second preset action is detected by the second front-facing camera in the visible range of the second full-screen picture, controlling the second front-facing camera to output face image data or face video data.
16. The photographing method according to claim 14, wherein the step of detecting the second preset action for ending the photographing includes:
respectively detecting touch operations on a first shooting button in the first full screen picture and a second shooting button in the second full screen picture;
if the second preset action is detected, controlling the first front-facing camera and/or the second front-facing camera to output face image data or face video data, wherein the step comprises the following steps:
if a first touch operation on the first shooting button is detected, controlling the first front camera to output image data or video data;
and if the second touch operation on the second shooting button is detected, controlling the second front camera to output face image data or face video data.
17. The shooting method according to claim 1, wherein after the step of controlling the display screen to alternately display the first face data and the second face data at a preset display frame rate, the method further comprises:
carrying out image synthesis on the first face data and the second face data to generate a face image;
alternatively,
and carrying out video synthesis on the first face data and the second face data to generate a face video.
18. The shooting method according to claim 1, wherein the step of acquiring the first face data collected by the first front camera and the second face data collected by the second front camera further comprises:
detecting the on state of the screen sharing function and of the first front camera and/or the second front camera;
if the screen sharing function is started, and the first front camera and the second front camera are both started, first face data collected by the first front camera and second face data collected by the second front camera are acquired.
19. The photographing method according to claim 18, wherein, after the step of detecting the on state of the screen sharing function and of the first front camera and/or the second front camera, the method further comprises:
and if the screen sharing function is detected to be closed, or the first front camera or the second front camera is detected to be closed, starting all light source assemblies in the backlight source.
20. A mobile terminal, the mobile terminal comprising a display screen, a first front camera and a second front camera, wherein the display screen comprises a backlight source, a light guide plate and a display panel; the backlight source comprises at least two light source assemblies, each light source assembly being respectively arranged on a different side edge of the light guide plate; the light emitted by a first light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a first angle to form a first full-screen picture, and the visible range of the first full-screen picture is: a range of angles of light exiting the display panel at an exit angle not less than the first angle; the light emitted by a second light source assembly of the at least two light source assemblies is projected to the display panel through the light guide plate at an incident angle not smaller than a second angle to form a second full-screen picture, and the visible range of the second full-screen picture is: a range of angles of light exiting the display panel at an exit angle not less than the second angle; the visible range of the second full-screen picture and the visible range of the first full-screen picture do not overlap within a preset angle range;
wherein the mobile terminal further comprises:
the first acquisition module is used for acquiring first face data acquired by the first front camera and second face data acquired by the second front camera;
the display module is used for controlling the display screen to alternately display the first face data and the second face data according to a preset display frame rate when self-photographing sharing is turned on;
the first face data and the second face data are image data or video data comprising faces;
the display module further includes:
the third display sub-module is used for displaying the second face data in a first preset area of the display screen when the display screen is controlled to display the first face data in a full screen mode according to a preset display frame rate;
and the fourth display sub-module is used for displaying the first face data in a second preset area of the display screen when the display screen is controlled to display the second face data in a full screen mode.
21. The mobile terminal of claim 20, wherein the display module comprises:
the first display sub-module is used for turning on the first light source assembly while turning off the second light source assembly, and controlling the display screen to display the first face data;
the second display submodule is used for, after a preset time interval, turning off the first light source assembly while turning on the second light source assembly, and controlling the display screen to display the second face data;
and the first processing submodule is used for cyclically executing, after each preset time interval, the steps of turning on the first light source assembly while turning off the second light source assembly and controlling the display screen to display the first face data, and then, after the preset time interval, turning off the first light source assembly while turning on the second light source assembly and controlling the display screen to display the second face data, until it is detected that the screen sharing function is turned off, or until it is detected that the first front camera or the second front camera is turned off.
22. The mobile terminal of claim 20, wherein the display module is specifically configured to:
dividing a display area of the display screen into a first display area and a second display area;
according to a preset display frame rate, when the first display area is controlled to display the first face data, the second face data is displayed in the second display area;
and when the first display area is controlled to display the second face data, displaying the first face data in the second display area.
23. The mobile terminal of claim 20, wherein the mobile terminal further comprises:
the first detection module is used for detecting whether a shooting control instruction is received or not;
the first processing module is used for controlling the first front camera and/or the second front camera to execute shooting operation according to the shooting control instruction if the shooting control instruction is detected;
wherein the shooting control instruction is associated with the rotation angle and the rotation direction of the first front camera and/or the second front camera.
24. The mobile terminal of claim 23, wherein the first detecting module comprises:
the first detection submodule is used for respectively controlling the first front camera and the second front camera to carry out control action detection;
the first processing module comprises:
the first shooting submodule is used for controlling the first front-facing camera to execute shooting operation according to a first control action if the first front-facing camera detects the first control action in the visible range of the first full-screen picture;
and the second shooting submodule is used for controlling the second front-facing camera to execute shooting operation according to a second control action if the second front-facing camera detects the second control action in the visible range of the second full-screen picture.
25. The mobile terminal of claim 24, wherein the control action comprises: at least one of a touch screen or spaced gesture control action, a head swing control action, a facial expression action, and a finger pointing control action.
26. The mobile terminal of claim 24, wherein the first detection module further comprises:
the first acquisition submodule is used for acquiring voice data acquired by a microphone;
the comparison submodule is used for comparing the voice data collected by the microphone with prestored reference voice data;
the second processing sub-module is used for determining that a shooting control instruction is received if the voice data collected by the microphone is matched with at least one item of pre-stored reference voice data;
wherein the reference voice data includes: first reference voice data associated with the first front-facing camera, and second reference voice data associated with the second front-facing camera.
27. The mobile terminal of claim 26, wherein the second processing sub-module comprises:
the first shooting unit is used for controlling the first front camera to execute shooting operation according to the voice data collected by the microphone if the voice data collected by the microphone is matched with the first reference voice data;
and the second shooting unit is used for controlling the second front camera to execute shooting operation according to the voice data collected by the microphone if the voice data collected by the microphone is matched with the second reference voice data.
28. The mobile terminal of claim 26, wherein the first detection module further comprises:
the prompting sub-module is used for displaying first prompting information and second prompting information on the first full-screen picture and the second full-screen picture respectively;
the second acquisition submodule is used for acquiring the first reference voice information of the first user and the second reference voice information of the second user, which are acquired by the microphone;
the third processing submodule is used for respectively establishing a first incidence relation between the first front camera and the first reference voice information and a second incidence relation between the second front camera and the second reference voice information;
the first prompt message is used for prompting a first user to input voice information, and the second prompt message is used for prompting a second user to input voice information.
29. The mobile terminal of claim 20, wherein the mobile terminal further comprises:
the face tracking module is used for respectively controlling the first front camera and the second front camera to track the face and determine the face position;
and the adjusting module is used for adjusting the shooting angle of the first front camera and/or the second front camera according to the face position.
30. The mobile terminal of claim 29, wherein the adjusting module comprises:
the first adjusting submodule is used for controlling the first front camera to rotate in an angle until the first face position is located in a first shooting range if the first face position tracked by the first front camera exceeds the first shooting range of the first front camera;
the second adjusting submodule is used for, if the position of a second face tracked by the second front camera exceeds a second shooting range of the second front camera, controlling the second front camera to rotate by an angle until the position of the second face is within the second shooting range;
the first shooting range is the central view field range of the first front camera, and the second shooting range is the central view field range of the second front camera.
31. The mobile terminal of claim 20, wherein the mobile terminal further comprises:
the second detection module is used for detecting a first preset action for triggering expression snapshot;
the second obtaining module is used for obtaining the expression similarity of the first face data and the second face data if the first preset action is detected;
the shooting module is used for respectively controlling the first front camera and the second front camera to execute shooting operation if the expression similarity is higher than a preset threshold value;
wherein the first preset action comprises: at least one of a touch screen or spaced gesture control action, a head swing control action, a facial expression action, and a finger pointing control action.
32. The mobile terminal of claim 31, wherein the second obtaining module comprises:
the starting submodule is used for starting a timer with preset duration if the first preset action is detected;
and the third obtaining submodule is used for obtaining the expression similarity of the first face data and the second face data if the preset duration of the timer is reached.
33. The mobile terminal of claim 20, wherein the mobile terminal further comprises:
the third detection module is used for detecting a second preset action for finishing shooting;
the second processing module is used for controlling the first front camera and/or the second front camera to output face image data or face video data if the second preset action is detected;
wherein the second preset action comprises at least one of a touch screen or an air gesture control action, a head swing control action, a facial expression action and a finger pointing control action.
34. The mobile terminal of claim 33, wherein the third detection module comprises:
the second detection submodule is used for respectively controlling the first front camera and the second front camera to carry out second preset action detection;
the second processing module comprises:
the fourth processing submodule is used for controlling the first front camera to output face image data or face video data if the first front camera detects the second preset action within the visible range of the first full-screen picture;
and the fifth processing submodule is used for controlling the second front camera to output face image data or face video data if the second front camera detects the second preset action within the visible range of the second full-screen picture.
35. The mobile terminal of claim 33, wherein the third detection module further comprises:
the third detection submodule is used for respectively detecting touch operations on the first shooting button in the first full-screen picture and the second shooting button in the second full-screen picture;
the second processing module comprises:
the sixth processing submodule is used for controlling the first front camera to output face image data or face video data if a first touch operation on the first shooting button is detected;
and the seventh processing submodule is used for controlling the second front camera to output face image data or face video data if a second touch operation on the second shooting button is detected.
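One possible reading of the end-of-shooting dispatch of claims 33 to 35: a gesture seen in one viewing zone, or a touch on one full-screen picture's shooting button, ends capture only for the corresponding camera. All event names below are assumed for illustration.

```python
# Sketch of the end-of-shooting dispatch of claims 33-35. Event
# names and the output call are illustrative assumptions.

def handle_stop_event(event: dict, output):
    """Route a 'second preset action' to the camera it belongs to."""
    if event["type"] == "gesture":
        # Claim 34: the camera that saw the gesture within its own
        # full-screen picture's visible range outputs its data.
        if event["seen_by"] == "front_camera_1":
            output("front_camera_1")
        elif event["seen_by"] == "front_camera_2":
            output("front_camera_2")
    elif event["type"] == "touch":
        # Claim 35: each full-screen picture has its own shooting button.
        if event["button"] == "first_shooting_button":
            output("front_camera_1")
        elif event["button"] == "second_shooting_button":
            output("front_camera_2")

handle_stop_event(
    {"type": "touch", "button": "second_shooting_button"},
    output=lambda cam: print(f"{cam}: writing face image/video data"),
)
```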
36. The mobile terminal of claim 20, wherein the mobile terminal further comprises:
the first synthesis module is used for carrying out image synthesis on the first face data and the second face data to generate a face image;
or,
and the second synthesis module is used for carrying out video synthesis on the first face data and the second face data to generate a face video.
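One plausible reading of claim 36's image synthesis is a side-by-side composite of the two face images. The layout and the use of Pillow are assumptions; the claim only requires that the two data sets be combined into one face image.

```python
# Minimal sketch of claim 36's image synthesis: place the two
# users' face images side by side on one canvas.
from PIL import Image

def synthesize(face_image_1: Image.Image, face_image_2: Image.Image) -> Image.Image:
    height = max(face_image_1.height, face_image_2.height)
    canvas = Image.new("RGB", (face_image_1.width + face_image_2.width, height))
    canvas.paste(face_image_1, (0, 0))
    canvas.paste(face_image_2, (face_image_1.width, 0))
    return canvas

# Toy usage with solid-color stand-ins for the two captured faces.
combined = synthesize(Image.new("RGB", (320, 480), "tan"),
                      Image.new("RGB", (320, 480), "wheat"))
print(combined.size)   # (640, 480)
```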
37. The mobile terminal of claim 20, wherein the mobile terminal further comprises:
the fourth detection module is used for detecting the on/off states of a screen sharing function, the first front camera and/or the second front camera;
and the third processing module is used for acquiring the first face data acquired by the first front camera and the second face data acquired by the second front camera if the screen sharing function is turned on and both the first front camera and the second front camera are turned on.
38. The mobile terminal of claim 37, wherein the mobile terminal further comprises:
and the fourth processing module is used for turning on all light source components in the backlight source if it is detected that the screen sharing function is turned off, or that the first front camera or the second front camera is turned off.
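A minimal sketch of the gating in claims 37 and 38, assuming simple boolean state flags and an illustrative backlight call.

```python
# Sketch of the gating of claims 37-38: acquire both face-data
# streams only when sharing and both cameras are on; otherwise
# restore every light source in the backlight (single-view display).

def update_capture_state(sharing_on: bool, cam1_on: bool, cam2_on: bool,
                         acquire, light_all_backlight_sources):
    if sharing_on and cam1_on and cam2_on:
        # Claim 37: both streams are acquired for alternating display.
        acquire("front_camera_1")
        acquire("front_camera_2")
    else:
        # Claim 38: leaving shared mode turns all light source
        # components in the backlight source back on.
        light_all_backlight_sources()

update_capture_state(
    sharing_on=False, cam1_on=True, cam2_on=True,
    acquire=lambda cam: print(f"acquiring {cam}"),
    light_all_backlight_sources=lambda: print("all backlight sources on"),
)
```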
39. A mobile terminal, characterized in that it comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the shooting method according to any one of claims 1 to 19 when executing the computer program.
40. A computer-readable storage medium, characterized in that a computer program is stored thereon, and the computer program, when executed by a processor, implements the steps of the shooting method according to any one of claims 1 to 19.
CN201710734744.XA 2017-08-24 2017-08-24 Shooting method, mobile terminal and computer readable storage medium Active CN107333047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710734744.XA CN107333047B (en) 2017-08-24 2017-08-24 Shooting method, mobile terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN107333047A (en) 2017-11-07
CN107333047B (en) 2020-03-31

Family

ID=60224730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710734744.XA Active CN107333047B (en) 2017-08-24 2017-08-24 Shooting method, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107333047B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107843993B (en) * 2017-11-09 2020-01-10 维沃移动通信有限公司 Control method for visual angle of display screen, mobile terminal and computer storage medium
CN108965714A (en) * 2018-08-01 2018-12-07 上海小蚁科技有限公司 Image-pickup method, device and computer storage media
CN108965713A (en) * 2018-08-01 2018-12-07 上海小蚁科技有限公司 Image-pickup method, device and computer readable storage medium
CN109190533B (en) * 2018-08-22 2021-07-09 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN114125143B (en) * 2020-08-31 2023-04-07 华为技术有限公司 Voice interaction method and electronic equipment
CN114401340B (en) * 2021-12-31 2023-09-26 荣耀终端有限公司 Collaborative shooting method, electronic equipment and medium thereof
CN115128856B * 2022-07-05 2023-11-28 武汉华星光电技术有限公司 Display device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101266362A * 2008-04-23 2008-09-17 友达光电股份有限公司 Multi-view angle LCD and driving method thereof
CN106101541A (en) * 2016-06-29 2016-11-09 捷开通讯(深圳)有限公司 A kind of terminal, photographing device and image pickup method based on personage's emotion thereof
CN106210526A (en) * 2016-07-29 2016-12-07 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN106817541A (en) * 2017-01-10 2017-06-09 惠州Tcl移动通信有限公司 A kind of method and system taken pictures based on facial expression control

Similar Documents

Publication Publication Date Title
CN107333047B (en) Shooting method, mobile terminal and computer readable storage medium
US10416789B2 (en) Automatic selection of a wireless connectivity protocol for an input device
US9360965B2 (en) Combined touch input and offset non-touch gesture
US11128802B2 (en) Photographing method and mobile terminal
CN107528938B (en) Video call method, terminal and computer readable storage medium
JP2020530631A (en) Interaction locating methods, systems, storage media, and smart devices
CN110546601B (en) Information processing device, information processing method, and program
US20170031538A1 (en) Optical head mounted display, television portal module and methods for controlling graphical user interface
CN111580652B (en) Video playing control method and device, augmented reality equipment and storage medium
US20170046866A1 (en) Method and device for presenting operating states
US10474324B2 (en) Uninterruptable overlay on a display
CN107347140B (en) A kind of image pickup method, mobile terminal and computer readable storage medium
CN107396151B (en) A kind of video playing control method and electronic equipment
CN106406535B (en) A kind of mobile device operation method, apparatus and mobile device
WO2017206383A1 (en) Method and device for controlling terminal, and terminal
CN112911147A (en) Display control method, display control device and electronic equipment
CN111596760A (en) Operation control method and device, electronic equipment and readable storage medium
US20180260031A1 (en) Method for controlling distribution of multiple sub-screens and device using the same
CN112954209B (en) Photographing method and device, electronic equipment and medium
CN109960406B (en) Intelligent electronic equipment gesture capturing and recognizing technology based on action between fingers of two hands
CN107317994B (en) Video call method and electronic equipment
CN107515733B (en) Application program control method and mobile terminal
CN109814764B (en) Equipment control method and device and electronic equipment
CN105573493B (en) Information processing method and electronic equipment
CN104850271B (en) A kind of input method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant