CN106572299B - Camera opening method and device - Google Patents


Info

Publication number
CN106572299B
Authority
CN
China
Prior art keywords: camera, terminal, determining, shooting mode, started
Prior art date
Legal status: Active
Application number
CN201610930539.6A
Other languages
Chinese (zh)
Other versions
CN106572299A (en)
Inventor
王广健
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201610930539.6A
Publication of CN106572299A
Application granted
Publication of CN106572299B


Classifications

    • H04N 23/60: Control of cameras or camera modules comprising electronic image sensors
    • H04M 1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04N 23/45: Cameras or camera modules generating image signals from two or more image sensors of different type or operating in different modes, e.g. a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N 23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N 23/611: Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure relates to a camera opening method and device, and belongs to the technical field of computers. The method comprises the following steps: when a camera opening instruction is detected, acquiring current attitude data of the terminal, wherein the attitude data is used for indicating the current attitude of the terminal, and the attitude comprises at least one of height, horizontal and vertical screen states, pitching states and inclination angles; determining a camera to be started based on the current attitude data of the terminal; and opening the camera to be opened. With the method and device, the user can open the camera application and shoot without manually switching cameras, which reduces the user's operation burden.

Description

Camera opening method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for opening a camera.
Background
With the development of computer technology, terminals such as mobile phones have become increasingly common in daily life. A camera is generally disposed in the terminal, and the user can take pictures with it. When the terminal is equipped with both a front camera and a rear camera, the terminal opens the rear camera by default when the user opens the camera application to shoot; if the user wants to take a selfie, the user must manually switch the current shooting mode to the self-shooting mode so that the front camera of the terminal is opened for shooting.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a camera opening method and apparatus.
According to a first aspect of the embodiments of the present disclosure, there is provided a camera opening method, including:
when a camera opening instruction is detected, acquiring current attitude data of the terminal, wherein the attitude data is used for indicating the current attitude of the terminal, and the attitude comprises at least one of height, horizontal and vertical screen states, pitching states and inclination angles;
determining a camera to be started based on the current attitude data of the terminal;
and opening the camera to be opened.
Optionally, the determining a camera to be turned on based on the current posture data of the terminal includes:
determining a shooting mode corresponding to the current attitude data of the terminal by specifying an attitude model based on the current attitude data of the terminal;
and determining a camera to be started based on a shooting mode corresponding to the current attitude data of the terminal.
Optionally, the determining a camera to be turned on based on a shooting mode corresponding to the current posture data of the terminal includes:
judging whether a shooting mode corresponding to the attitude data of the terminal is a self-shooting mode or not;
and when the shooting mode corresponding to the current attitude data of the terminal is not the self-shooting mode, determining that the camera to be started is a rear camera.
Optionally, after determining whether the shooting mode corresponding to the gesture data of the terminal is a self-shooting mode, the method further includes:
when the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining that a camera to be started is a front-facing camera; alternatively,
when the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining the shooting probability of a front camera; when the shooting probability of the front camera is greater than or equal to the designated probability, determining that the camera to be started is the front camera; when the shooting probability of the front camera is smaller than the designated probability, determining that the camera to be started is a rear camera; alternatively,
when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, acquiring an image through a front-facing camera; and determining the camera to be started based on the acquired image.
Optionally, the determining a camera to be turned on based on the acquired image includes:
judging whether the acquired image contains a face image or not;
and when the acquired image does not contain the face image, determining that the camera to be started is a rear camera.
Optionally, after determining whether the acquired image includes a face image, the method further includes:
when the acquired image contains a face image, determining that a camera to be started is a front-facing camera; alternatively,
when the collected image contains a face image, identifying the facial expression of the face image; when the recognized facial expression is the designated expression, determining that the camera to be started is a front-facing camera; and when the recognized facial expression is not the designated expression, determining that the camera to be started is a rear camera.
Optionally, before determining, based on the current posture data of the terminal, a shooting mode corresponding to the current posture data of the terminal by specifying a posture model, the method further includes:
acquiring a plurality of training posture data sets, wherein each training posture data set comprises at least one training posture data, and each training posture data set corresponds to one shooting mode;
and training the attitude model to be trained by using the plurality of training attitude data sets to obtain the specified attitude model.
According to a second aspect of the embodiments of the present disclosure, there is provided a camera opening device, the device including:
the acquisition module is used for acquiring current attitude data of the terminal when a camera opening instruction is detected, wherein the attitude data is used for indicating the current attitude of the terminal, and the attitude comprises at least one of height, horizontal and vertical screen states, pitching states and inclination angles;
the determining module is used for determining a camera to be started based on the current attitude data of the terminal;
and the opening module is used for opening the camera to be started.
Optionally, the determining module includes:
the first determining submodule is used for determining a shooting mode corresponding to the current attitude data of the terminal through a specified attitude model based on the current attitude data of the terminal;
and the second determining submodule is used for determining the camera to be started based on the shooting mode corresponding to the current attitude data of the terminal.
Optionally, the second determining submodule is configured to:
judging whether a shooting mode corresponding to the attitude data of the terminal is a self-shooting mode or not;
and when the shooting mode corresponding to the current attitude data of the terminal is not the self-shooting mode, determining that the camera to be started is a rear camera.
Optionally, the second determining sub-module is further configured to:
when the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining that a camera to be started is a front-facing camera; alternatively,
when the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining the shooting probability of a front camera; when the shooting probability of the front camera is greater than or equal to the designated probability, determining that the camera to be started is the front camera; when the shooting probability of the front camera is smaller than the designated probability, determining that the camera to be started is a rear camera; alternatively,
when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, acquiring an image through a front-facing camera; and determining the camera to be started based on the acquired image.
Optionally, the second determining sub-module is further configured to:
judging whether the acquired image contains a face image or not;
and when the acquired image does not contain the face image, determining that the camera to be started is a rear camera.
Optionally, the second determining sub-module is further configured to:
when the acquired image contains a face image, determining that a camera to be started is a front-facing camera; alternatively,
when the collected image contains a face image, identifying the facial expression of the face image; when the recognized facial expression is the designated expression, determining that the camera to be started is a front-facing camera; and when the recognized facial expression is not the designated expression, determining that the camera to be started is a rear camera.
Optionally, the determining module further comprises:
the acquisition submodule is used for acquiring a plurality of training posture data sets, each training posture data set comprises at least one training posture data, and each training posture data set corresponds to one shooting mode;
and the training submodule is used for training the posture model to be trained by using the plurality of training posture data sets to obtain the specified posture model.
According to a third aspect of the embodiments of the present disclosure, there is provided a camera opening device, the device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
when a camera opening instruction is detected, acquiring current attitude data of the terminal, wherein the attitude data is used for indicating the current attitude of the terminal, and the attitude comprises at least one of height, horizontal and vertical screen states, pitching states and inclination angles;
determining a camera to be started based on the current attitude data of the terminal;
and opening the camera to be opened.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: when a camera opening instruction is detected, the current attitude data of the terminal is obtained, the camera to be opened is determined based on that attitude data, and the camera to be opened is then opened, so that the user can shoot with the camera application without manually switching cameras, which reduces the user's operation burden.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating a method for turning on a camera according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating another camera turn-on method according to an exemplary embodiment.
Fig. 3A is a block diagram illustrating a camera opening device according to an exemplary embodiment.
FIG. 3B is a block diagram illustrating a determination module in accordance with an exemplary embodiment.
FIG. 3C is a block diagram illustrating another determination module in accordance with an exemplary embodiment.
Fig. 4 is a block diagram illustrating another camera opening device in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Before the embodiments of the present disclosure are explained in detail, an application scenario of the embodiments is described. In the related art, when a terminal is equipped with both a front camera and a rear camera, the terminal opens the rear camera by default when the user opens the camera application to shoot. If the user actually wants to take a selfie, the user must manually switch the current shooting mode to the self-shooting mode so that the front camera is opened, which places a considerable operation burden on the user. The embodiments of the present disclosure therefore provide a camera opening method to reduce this burden.
Fig. 1 is a flowchart illustrating a camera opening method for use in a terminal according to an exemplary embodiment. As shown in fig. 1, the method includes the following steps.
In step 101, when a camera opening instruction is detected, current attitude data of the terminal is acquired, the attitude data is used for indicating the current attitude of the terminal, and the attitude includes at least one of height, horizontal and vertical screen states, pitching state and inclination angle.
In step 102, a camera to be turned on is determined based on the current attitude data of the terminal.
In step 103, the camera to be turned on is turned on.
In the embodiments of the present disclosure, when a camera opening instruction is detected, the current attitude data of the terminal is obtained, the camera to be opened is determined based on that attitude data, and the camera to be opened is then opened, so that the user can shoot with the camera application without manually switching cameras, which reduces the user's operation burden.
Optionally, determining a camera to be turned on based on the current posture data of the terminal includes:
determining a shooting mode corresponding to the current attitude data of the terminal by specifying an attitude model based on the current attitude data of the terminal;
and determining a camera to be started based on a shooting mode corresponding to the current attitude data of the terminal.
Optionally, determining a camera to be turned on based on a shooting mode corresponding to the current attitude data of the terminal, including:
judging whether a shooting mode corresponding to the attitude data of the terminal is a self-shooting mode or not;
and when the shooting mode corresponding to the current attitude data of the terminal is not the self-shooting mode, determining that the camera to be started is a rear camera.
Optionally, after determining whether the shooting mode corresponding to the attitude data of the terminal is a self-shooting mode, the method further includes:
when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining that a camera to be started is a front-facing camera; alternatively,
when the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining the shooting probability of the front camera; when the shooting probability of the front camera is greater than or equal to the designated probability, determining that the camera to be started is the front camera; when the shooting probability of the front camera is smaller than the designated probability, determining that the camera to be started is a rear camera; alternatively,
when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, acquiring an image through a front camera; and determining the camera to be started based on the acquired image.
Optionally, determining a camera to be turned on based on the acquired image includes:
judging whether the collected image contains a face image or not;
and when the acquired image does not contain the face image, determining that the camera to be started is a rear camera.
Optionally, after determining whether the acquired image includes a face image, the method further includes:
when the acquired image contains a face image, determining that a camera to be started is a front-facing camera; alternatively,
when the collected image contains a face image, recognizing the facial expression of the face image; when the recognized facial expression is the designated expression, determining that the camera to be started is a front-facing camera; and when the recognized facial expression is not the designated expression, determining that the camera to be started is a rear camera.
Optionally, before determining, based on the current posture data of the terminal, a shooting mode corresponding to the current posture data of the terminal by specifying a posture model, the method further includes:
acquiring a plurality of training posture data sets, wherein each training posture data set comprises at least one training posture data, and each training posture data set corresponds to one shooting mode;
and training the posture model to be trained by using a plurality of training posture data sets to obtain the specified posture model.
All the above optional technical solutions can be combined arbitrarily to form optional embodiments of the present disclosure, and the embodiments of the present disclosure are not described in detail again.
Fig. 2 is a flowchart illustrating a camera turning-on method for use in a terminal according to an exemplary embodiment. As shown in fig. 2, the method includes the following steps.
In step 201, when a camera opening instruction is detected, current attitude data of the terminal is acquired.
It should be noted that the camera opening instruction instructs the terminal to open a camera and may be triggered by the user through a specified operation, such as a single-click operation, a double-click operation, or a voice operation, which is not specifically limited in this embodiment of the present disclosure. For example, when the user opens the camera application in the terminal with a single click, that click may trigger the camera opening instruction to open a camera.
In addition, the attitude data indicates the attitude the terminal is currently in, and the attitude may include at least one of a height, a horizontal/vertical screen state, a pitch state, and a tilt angle, which is not specifically limited in this disclosure. The height is the distance between the terminal and the nearest obstacle in the direction of gravity. The horizontal/vertical screen state includes portrait, inverted portrait, landscape-left, and landscape-right. The pitch state includes a face-down state and a face-up state, where the face-down state means the screen of the terminal faces downward and the face-up state means the screen faces upward. The tilt angle includes a horizontal tilt angle and a vertical tilt angle, where the horizontal tilt angle is the angle between the terminal and the horizontal plane and the vertical tilt angle is the angle between the terminal and the vertical plane.
When acquiring the current attitude data of the terminal, the current attitude data of the terminal may be acquired through a gravity sensor, a direction sensor, a three-axis gyroscope, a distance sensor, and the like configured in the terminal, which is not specifically limited in this embodiment of the disclosure. In addition, the operation of acquiring the current attitude data of the terminal may refer to the related art, which is not described in detail in the embodiments of the present disclosure.
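As a non-authoritative illustration of how such attitude data might be gathered on an Android terminal, the following Kotlin sketch reads the accelerometer and derives a rough screen orientation, face-up/face-down state, and tilt angle. The TerminalAttitude fields and the single-sensor approach are assumptions made only for this sketch; a real implementation could also combine the gravity sensor, orientation sensor, gyroscope, and distance sensor mentioned above.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.abs
import kotlin.math.atan2
import kotlin.math.sqrt

// Hypothetical container for the attitude described above: screen orientation,
// face-up/face-down state, and the angle between the terminal and the horizontal plane.
data class TerminalAttitude(
    val landscape: Boolean,
    val faceDown: Boolean,
    val horizontalTiltDeg: Double,
)

class AttitudeReader(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager

    @Volatile
    var latest: TerminalAttitude? = null
        private set

    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_UI)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type != Sensor.TYPE_ACCELEROMETER) return
        val (x, y, z) = event.values
        // Angle between the device plane and the horizontal plane, estimated from gravity alone.
        val tilt = Math.toDegrees(atan2(sqrt((x * x + y * y).toDouble()), z.toDouble()))
        latest = TerminalAttitude(
            landscape = abs(x) > abs(y),   // crude portrait/landscape guess
            faceDown = z < 0,              // gravity pointing out of the screen
            horizontalTiltDeg = tilt,
        )
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {
        // Accuracy changes are not needed for this sketch.
    }
}
```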
In step 202, a camera to be turned on is determined based on the current attitude data of the terminal.
Because the front camera and the rear camera generally serve different purposes (the front camera is typically used for selfies while the rear camera is used for ordinary shooting), the attitude of the terminal differs depending on which camera the user intends to use. For example, when the user uses the front camera, the terminal is generally held higher than when the rear camera is used. The attitude of the terminal can therefore indicate, to some extent, which camera should be opened, so the camera to be turned on can be determined based on the current attitude data of the terminal. Specifically, the shooting mode corresponding to the current attitude data of the terminal may be determined through a specified attitude model, and the camera to be turned on is then determined based on that shooting mode.
It should be noted that the specified attitude model is used to determine the shooting mode corresponding to the attitude data and can distinguish multiple shooting modes, such as a self-timer mode and a normal shooting mode, which is not specifically limited in this embodiment of the disclosure.
In addition, before the shooting mode corresponding to the current attitude data of the terminal is determined through the specified attitude model, the specified attitude model can be established in advance. To establish it, a plurality of training attitude data sets are obtained first, and the attitude model to be trained is then trained with these training attitude data sets to obtain the specified attitude model.
It should be noted that each of the training attitude data sets includes at least one piece of training attitude data, and each training attitude data set corresponds to one shooting mode; that is, each piece of training attitude data is attitude data labeled with a shooting mode identifier.
In addition, when a plurality of training posture data sets are acquired, the posture data of the terminal and the shooting mode corresponding to the camera used at this time may be acquired each time a shooting operation is performed, the acquired shooting mode is determined as the shooting mode corresponding to the posture data acquired at this time, and then the posture data with the shooting mode identifier is determined as the training posture data. The front camera may correspond to a self-photographing mode, and the rear camera may correspond to a shooting mode, which is not specifically limited in the embodiments of the present disclosure.
Furthermore, when the training attitude data sets are used to train the attitude model to be trained, the embodiment of the present disclosure may train the model in a supervised learning manner. Supervised learning here means that, given the inputs and expected outputs of the model, the parameters of the model are continuously adjusted with a specified adjustment algorithm until the model reaches the required performance. In the embodiment of the present disclosure, the input is the training attitude data, and the output is the shooting mode corresponding to that training attitude data.
It should be noted that the specified adjustment algorithm may be preset; for example, it may be a stochastic gradient descent algorithm, which is not specifically limited in this disclosure.
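For concreteness only, the supervised training described above could look roughly like the following sketch: a toy logistic-regression "attitude model" whose weights are adjusted by stochastic gradient descent over labeled attitude vectors. The feature layout, the binary self-timer/normal labeling, and the hyperparameters are assumptions made for illustration, not the patent's actual model.

```kotlin
import kotlin.math.exp

// One labeled sample: an attitude feature vector (e.g. height, tilt, face-up flag, ...)
// plus its shooting-mode label (1 = self-timer mode, 0 = normal shooting mode).
data class TrainingPose(val features: DoubleArray, val selfTimer: Int)

// Minimal "attitude model": logistic regression trained with stochastic gradient descent.
class AttitudeModel(featureCount: Int) {
    private val weights = DoubleArray(featureCount)
    private var bias = 0.0

    fun selfTimerProbability(features: DoubleArray): Double {
        var z = bias
        for (i in features.indices) z += weights[i] * features[i]
        return 1.0 / (1.0 + exp(-z))
    }

    fun train(data: List<TrainingPose>, epochs: Int = 50, learningRate: Double = 0.1) {
        repeat(epochs) {
            for (sample in data.shuffled()) {           // stochastic: one sample at a time
                val p = selfTimerProbability(sample.features)
                val error = p - sample.selfTimer        // gradient of the log-loss w.r.t. z
                for (i in weights.indices) {
                    weights[i] -= learningRate * error * sample.features[i]
                }
                bias -= learningRate * error
            }
        }
    }

    // Map the model output back to a shooting mode, as step 202 requires.
    fun shootingMode(features: DoubleArray): String =
        if (selfTimerProbability(features) >= 0.5) "self-timer" else "normal"
}
```

The binary model is only the simplest possible choice; a model distinguishing more shooting modes would use a multi-class classifier trained the same way.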
Based on the shooting mode corresponding to the current attitude data of the terminal, the operation of determining the camera to be turned on may be: judging whether a shooting mode corresponding to the attitude data of the terminal is a self-shooting mode or not; and when the shooting mode corresponding to the current attitude data of the terminal is not the self-shooting mode, determining that the camera to be started is a rear camera.
When the shooting mode corresponding to the current attitude data of the terminal is not the self-shooting mode, the user is not likely to want to use the front camera to carry out self-shooting, so that the camera to be turned on can be determined to be the rear camera.
Further, after it is judged whether the shooting mode corresponding to the attitude data of the terminal is the self-shooting mode, and the shooting mode is indeed the self-shooting mode, the camera to be turned on can be determined in any one of the following three ways.
The first mode is as follows: and when the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining that the camera to be started is a front-facing camera.
When the shooting mode corresponding to the current attitude data of the terminal is the self-shooting mode, it is indicated that the user is more likely to want to use the front camera to carry out self-shooting, so that the camera to be turned on can be determined to be the front camera. At the moment, after the shooting mode corresponding to the current attitude data of the terminal is determined, the camera to be started can be determined only according to the shooting mode corresponding to the current attitude data of the terminal, and the determination efficiency is high.
The second mode is as follows: when the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining the shooting probability of the front camera; when the shooting probability of the front camera is greater than or equal to the designated probability, determining that the camera to be started is the front camera; and when the shooting probability of the front camera is smaller than the designated probability, determining that the camera to be started is the rear camera.
It should be noted that the designated probability may be preset, for example, the designated probability may be 0.7, 0.8, and the like, which is not specifically limited in this disclosure.
When the shooting probability of the front-facing camera is determined, the shooting probability of the front-facing camera may be obtained by dividing the number of times of shooting by using the front-facing camera by the total number of times of shooting by the terminal. In addition, when the shooting probability of the front camera is greater than or equal to the designated probability, the user often uses the front camera to shoot, and when the shooting probability of the front camera is less than the designated probability, the user infrequently uses the front camera to shoot.
When the shooting mode corresponding to the current attitude data of the terminal is the self-shooting mode, if the shooting probability of the front camera is greater than or equal to the designated probability, the user probably wants to take a selfie with the front camera, so the camera to be turned on can be determined to be the front camera. If the shooting probability of the front camera is smaller than the designated probability, the user is less likely to want to take a selfie, so the camera to be turned on can be determined to be the rear camera. In this way, after determining that the shooting mode corresponding to the current attitude data is the self-shooting mode, the terminal further decides the camera to be turned on according to the shooting probability of the front camera, which improves the accuracy of the determination.
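A minimal sketch of this second way, assuming the terminal keeps simple usage counters for front-camera shots and total shots (the counter names and the 0.7 default are illustrative only):

```kotlin
// Sketch of the "second way" above. The usage counters are assumed to be tracked by the
// terminal itself; the 0.7 default mirrors the designated-probability example in the text.
data class CameraUsageStats(val frontShots: Long, val totalShots: Long)

fun chooseCameraBySelfieProbability(
    isSelfTimerMode: Boolean,
    stats: CameraUsageStats,
    designatedProbability: Double = 0.7,
): String {
    if (!isSelfTimerMode) return "rear"
    if (stats.totalShots == 0L) return "front"   // no history yet: assume selfie intent (illustrative choice)
    val frontProbability = stats.frontShots.toDouble() / stats.totalShots
    return if (frontProbability >= designatedProbability) "front" else "rear"
}
```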
The third mode is as follows: when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, acquiring an image through a front camera; and determining the camera to be started based on the acquired image.
The operation of acquiring an image through a front camera may refer to the related art, which is not described in detail in the embodiments of the present disclosure.
Based on the acquired image, the operation of determining the camera to be turned on may be: judging whether the collected image contains a face image or not; and when the acquired image does not contain the face image, determining that the camera to be started is a rear camera.
Because the user usually shoots his or her own face when taking a selfie with the front camera, if the captured image does not contain a face image, the user is unlikely to want to take a selfie with the front camera, and the camera to be turned on can be determined to be the rear camera.
The operation of determining whether the acquired image includes a face image may refer to related technologies, which are not described in detail in the embodiments of the present disclosure.
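As one possible realization of this face check, the sketch below uses the platform android.media.FaceDetector class; whether the patent's implementation relies on this class or on another detector is not stated, so treat the choice as an assumption.

```kotlin
import android.graphics.Bitmap
import android.media.FaceDetector

// Sketch of the face check in the third way. android.media.FaceDetector requires an
// RGB_565 bitmap (with an even width); a production app might use a newer detector instead.
fun capturedImageContainsFace(preview: Bitmap): Boolean {
    val rgb565 = preview.copy(Bitmap.Config.RGB_565, false) ?: return false
    val faces = arrayOfNulls<FaceDetector.Face>(1)
    val found = FaceDetector(rgb565.width, rgb565.height, 1).findFaces(rgb565, faces)
    return found > 0
}

// The decision that follows from it, per the text: no face means open the rear camera.
fun chooseCameraByFace(preview: Bitmap): String =
    if (capturedImageContainsFace(preview)) "front" else "rear"
```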
Further, after it is judged whether the acquired image contains a face image, when the image does contain a face image, the camera to be turned on may be determined to be the front camera; alternatively, when the image contains a face image, the facial expression of the face image is recognized, and the camera to be turned on is determined to be the front camera when the recognized facial expression is the designated expression, or the rear camera when it is not.
On one hand, because the user usually shoots his or her own face when taking a selfie with the front camera, if the captured image contains a face image, the user is likely to want to take a selfie, and the camera to be turned on can be determined to be the front camera. In this case, after the terminal captures the image through the front camera, the camera to be turned on is determined solely by whether a face appears in the image, so the determination is efficient.
On the other hand, when the user shoots his or her own face with the front camera, the user's facial expression is usually fairly consistent, for example usually smiling. Therefore, when the captured image contains a face image, if the recognized facial expression is the designated expression, the user likely wants to take a selfie and the camera to be turned on can be determined to be the front camera; if the recognized facial expression is not the designated expression, the user is less likely to want a selfie, and the camera to be turned on can be determined to be the rear camera. In this case, after determining that the captured image contains a face image, the terminal further decides the camera to be turned on according to the facial expression, which improves the accuracy of the determination.
It should be noted that the designated expression may be preset, for example, the designated expression may be smiling, and this is not specifically limited in this disclosure. In addition, in actual use, the specified expression may be set by the terminal, and when the terminal sets the specified expression, the terminal may perform facial expression recognition on all facial images obtained by the user using the front-facing camera, and determine the facial expression with the largest occurrence frequency in the recognized facial expressions as the specified expression.
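A small sketch of that "most frequent expression" rule, assuming a hypothetical FacialExpression label produced by whatever expression recognizer the terminal uses (the enum values and the smiling fallback are illustrative only):

```kotlin
// Sketch of how the terminal might derive the "designated expression": pick the expression
// that occurs most often across the user's previous front-camera face images.
enum class FacialExpression { SMILING, NEUTRAL, POUTING, OTHER }

fun designatedExpression(history: List<FacialExpression>): FacialExpression =
    history.groupingBy { it }.eachCount()
        .entries.maxByOrNull { it.value }?.key
        ?: FacialExpression.SMILING   // fall back to the smiling example given in the text
```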
When the acquired image includes a face image, the operation of recognizing the facial expression of the face image may refer to related technologies, which are not described in detail in the embodiments of the present disclosure.
It should be noted that, in the embodiments of the present disclosure, the terminal may determine the camera to be turned on directly from the shooting mode corresponding to its current attitude data, or it may refine that determination using the image, the face image, or the facial expression acquired through the front camera. Direct determination from the shooting mode is more efficient, while refining the determination with the front-camera image, face image, or facial expression is more accurate.
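Tying the pieces together, a hedged sketch of the overall selection logic of step 202 might read as follows; it reuses the hypothetical AttitudeModel, CameraUsageStats, and face-check helpers sketched earlier, and treats the choice between the refinement ways as a simple policy rather than anything the patent prescribes.

```kotlin
import android.graphics.Bitmap

// Combines the earlier sketches: the attitude model predicts the shooting mode; if it is
// not the self-timer mode the rear camera is chosen, otherwise the choice is refined by
// the front-camera preview (third way) or, failing that, by usage statistics (second way).
fun determineCameraToOpen(
    model: AttitudeModel,
    attitudeFeatures: DoubleArray,
    stats: CameraUsageStats,
    preview: Bitmap?,                 // front-camera frame if the third way is used, else null
): String {
    if (model.shootingMode(attitudeFeatures) != "self-timer") return "rear"
    return if (preview != null) {
        chooseCameraByFace(preview)                   // third way
    } else {
        chooseCameraBySelfieProbability(true, stats)  // second way
    }
}
```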
In step 203, the camera to be turned on is turned on.
It should be noted that, reference may be made to related technologies for the operation of turning on the camera to be turned on, which is not described in detail in the embodiments of the present disclosure.
In the embodiments of the present disclosure, when a camera opening instruction is detected, the current attitude data of the terminal is obtained, the camera to be opened is determined based on that attitude data, and the camera to be opened is then opened, so that the user can shoot with the camera application without manually switching cameras, which reduces the user's operation burden.
Fig. 3A is a block diagram illustrating a camera opening device according to an exemplary embodiment. Referring to fig. 3A, the apparatus includes an acquisition module 301, a determination module 302, and an opening module 303.
The acquiring module 301 is configured to acquire current attitude data of the terminal when a camera opening instruction is detected, where the attitude data is used to indicate a current attitude of the terminal, and the attitude includes at least one of a height, a horizontal and vertical screen state, a pitch state, and an inclination angle;
a determining module 302, configured to determine a camera to be turned on based on current attitude data of the terminal;
an opening module 303, configured to open the camera to be turned on.
Optionally, referring to fig. 3B, the determination module 302 includes a first determination submodule 3021 and a second determination submodule 3022.
A first determining submodule 3021, configured to determine, based on current attitude data of the terminal, a shooting mode corresponding to the current attitude data of the terminal by specifying an attitude model;
the second determining submodule 3022 is configured to determine a camera to be turned on based on a shooting mode corresponding to the current posture data of the terminal.
Optionally, the second determining submodule 3022 is configured to:
judging whether a shooting mode corresponding to the attitude data of the terminal is a self-shooting mode or not;
and when the shooting mode corresponding to the current attitude data of the terminal is not the self-shooting mode, determining that the camera to be started is a rear camera.
Optionally, the second determining submodule 3022 is further configured to:
when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining that a camera to be started is a front-facing camera; alternatively,
when the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining the shooting probability of the front camera; when the shooting probability of the front camera is greater than or equal to the designated probability, determining that the camera to be started is the front camera; when the shooting probability of the front camera is smaller than the designated probability, determining that the camera to be started is a rear camera; alternatively,
when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, acquiring an image through a front camera; and determining the camera to be started based on the acquired image.
Optionally, the second determining submodule 3022 is further configured to:
judging whether the collected image contains a face image or not;
and when the acquired image does not contain the face image, determining that the camera to be started is a rear camera.
Optionally, the second determining submodule 3022 is further configured to:
when the acquired image contains a face image, determining that a camera to be started is a front-facing camera; alternatively,
when the collected image contains a face image, recognizing the facial expression of the face image; when the recognized facial expression is the designated expression, determining that the camera to be started is a front-facing camera; and when the recognized facial expression is not the designated expression, determining that the camera to be started is a rear camera.
Optionally, referring to fig. 3C, the determination module 302 further includes an acquisition sub-module 3023 and a training sub-module 3024.
An obtaining submodule 3023, configured to obtain a plurality of training posture data sets, where each training posture data set includes at least one training posture data, and each training posture data set corresponds to one shooting mode;
the training submodule 3024 is configured to train the posture model to be trained by using the multiple training posture data sets, so as to obtain an assigned posture model.
In the embodiments of the present disclosure, when a camera opening instruction is detected, the current attitude data of the terminal is obtained, the camera to be opened is determined based on that attitude data, and the camera to be opened is then opened, so that the user can shoot with the camera application without manually switching cameras, which reduces the user's operation burden.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 4 is a block diagram illustrating an apparatus 400 for camera turn-on according to an exemplary embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, the apparatus 400 may include one or more of the following components: processing components 402, memory 404, power components 406, multimedia components 408, audio components 410, input/output (I/O) interfaces 412, sensor components 414, and communication components 416.
The processing component 402 generally controls overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the apparatus 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply components 406 provide power to the various components of device 400. The power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power supplies for the apparatus 400.
The multimedia component 408 includes a screen that provides an output interface between the device 400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 400 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, audio component 410 includes a Microphone (MIC) configured to receive external audio signals when apparatus 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, an open button, and a lock button.
The sensor component 414 includes one or more sensors for providing various aspects of status assessment for the apparatus 400. For example, the sensor component 414 may detect an open/closed state of the apparatus 400 and the relative positioning of components such as the display and keypad of the apparatus 400. The sensor component 414 may also detect a change in the position of the apparatus 400 or of a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in the temperature of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the apparatus 400 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein, which when executed by a processor of a mobile terminal, enable the mobile terminal to perform a camera turn-on method, the method comprising:
when a camera opening instruction is detected, acquiring current attitude data of the terminal, wherein the attitude data is used for indicating the current attitude of the terminal, and the attitude comprises at least one of height, horizontal and vertical screen states, pitching states and inclination angles;
determining a camera to be started based on the current attitude data of the terminal;
and opening the camera to be opened.
Optionally, determining a camera to be turned on based on the current posture data of the terminal includes:
determining a shooting mode corresponding to the current attitude data of the terminal by specifying an attitude model based on the current attitude data of the terminal;
and determining a camera to be started based on a shooting mode corresponding to the current attitude data of the terminal.
Optionally, determining a camera to be turned on based on a shooting mode corresponding to the current attitude data of the terminal, including:
judging whether a shooting mode corresponding to the attitude data of the terminal is a self-shooting mode or not;
and when the shooting mode corresponding to the current attitude data of the terminal is not the self-shooting mode, determining that the camera to be started is a rear camera.
Optionally, after determining whether the shooting mode corresponding to the attitude data of the terminal is a self-shooting mode, the method further includes:
when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining that a camera to be started is a front-facing camera; alternatively,
when the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, determining the shooting probability of the front camera; when the shooting probability of the front camera is greater than or equal to the designated probability, determining that the camera to be started is the front camera; when the shooting probability of the front camera is smaller than the designated probability, determining that the camera to be started is a rear camera; alternatively,
when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, acquiring an image through a front camera; and determining the camera to be started based on the acquired image.
Optionally, determining a camera to be turned on based on the acquired image includes:
judging whether the collected image contains a face image or not;
and when the acquired image does not contain the face image, determining that the camera to be started is a rear camera.
Optionally, after determining whether the acquired image includes a face image, the method further includes:
when the acquired image contains a face image, determining that a camera to be started is a front-facing camera; alternatively,
when the collected image contains a face image, recognizing the facial expression of the face image; when the recognized facial expression is the designated expression, determining that the camera to be started is a front-facing camera; and when the recognized facial expression is not the designated expression, determining that the camera to be started is a rear camera.
Optionally, before determining, based on the current posture data of the terminal, a shooting mode corresponding to the current posture data of the terminal by specifying a posture model, the method further includes:
acquiring a plurality of training posture data sets, wherein each training posture data set comprises at least one training posture data, and each training posture data set corresponds to one shooting mode;
and training the posture model to be trained by using a plurality of training posture data sets to obtain the specified posture model.
In the embodiments of the present disclosure, when a camera opening instruction is detected, the current attitude data of the terminal is obtained, the camera to be opened is determined based on that attitude data, and the camera to be opened is then opened, so that the user can shoot with the camera application without manually switching cameras, which reduces the user's operation burden.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (11)

1. A camera opening method is characterized by comprising the following steps:
when a camera opening instruction is detected, acquiring current attitude data of the terminal, wherein the attitude data is used for indicating the current attitude of the terminal, and the attitude comprises at least one of height, horizontal and vertical screen states and pitching states;
determining a shooting mode corresponding to the current attitude data of the terminal by specifying an attitude model based on the current attitude data of the terminal;
judging whether a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode or not;
when a shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, acquiring an image through a front-facing camera, and determining a camera to be started based on the acquired image;
opening the camera to be opened;
the camera to be turned on is determined based on the acquired image, and the method comprises the following steps:
judging whether the acquired image contains a face image or not;
when the collected image contains a face image, identifying the facial expression of the face image; when the recognized facial expression is the designated expression, determining that the camera to be started is a front-facing camera; and when the recognized facial expression is not the designated expression, determining that the camera to be started is a rear camera, wherein the designated expression is the facial expression with the largest occurrence frequency among the facial images obtained by the user using the front camera.
2. The method of claim 1, wherein after judging whether the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, the method further comprises:
when the shooting mode corresponding to the current attitude data of the terminal is not the self-shooting mode, determining that the camera to be started is a rear camera.
3. The method of claim 1, wherein the determining a camera to be started based on the acquired image further comprises:
when the acquired image does not contain a face image, determining that the camera to be started is a rear camera.
4. The method of claim 1, wherein after judging whether the acquired image contains a face image, the method further comprises:
when the acquired image contains a face image, determining that the camera to be started is a front-facing camera.
5. The method according to any one of claims 2-4, wherein before the determining, through the specified attitude model and based on the current attitude data of the terminal, a shooting mode corresponding to the current attitude data of the terminal, the method further comprises:
acquiring a plurality of training attitude data sets, wherein each training attitude data set comprises at least one piece of training attitude data, and each training attitude data set corresponds to one shooting mode;
and training the attitude model to be trained by using the plurality of training attitude data sets to obtain the specified attitude model.
6. A camera opening device, the device comprising:
the terminal comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring current attitude data of the terminal when a camera opening instruction is detected, the attitude data is used for indicating the current attitude of the terminal, and the attitude comprises at least one of height, horizontal and vertical screen states and pitching states;
the determining module is used for determining a camera to be started based on the current attitude data of the terminal;
the starting mode is used for starting the camera to be started;
wherein the determining module comprises:
the first determining submodule is used for determining a shooting mode corresponding to the current attitude data of the terminal through a specified attitude model based on the current attitude data of the terminal;
the second determining submodule is used for judging whether the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode, and for acquiring an image through a front-facing camera and determining a camera to be started based on the acquired image when the shooting mode corresponding to the current attitude data of the terminal is the self-shooting mode;
wherein the second determining submodule is further used for:
judging whether the acquired image contains a face image;
when the acquired image contains a face image, recognizing a facial expression of the face image; when the recognized facial expression is a designated expression, determining that the camera to be started is the front-facing camera; and when the recognized facial expression is not the designated expression, determining that the camera to be started is a rear camera, wherein the designated expression is the facial expression that occurs most frequently in face images previously captured by the user with the front-facing camera.
7. The apparatus of claim 6, wherein the second determining submodule is further used for:
when the shooting mode corresponding to the current attitude data of the terminal is not the self-shooting mode, determining that the camera to be started is a rear camera.
8. The apparatus of claim 6, wherein the second determining submodule is further used for:
when the acquired image does not contain a face image, determining that the camera to be started is a rear camera.
9. The apparatus of claim 6, wherein the second determining submodule is further used for: when the acquired image contains a face image, determining that the camera to be started is a front-facing camera.
10. The apparatus of any one of claims 6-9, wherein the determining module further comprises:
the acquisition submodule is used for acquiring a plurality of training attitude data sets, wherein each training attitude data set comprises at least one piece of training attitude data, and each training attitude data set corresponds to one shooting mode;
and the training submodule is used for training the attitude model to be trained by using the plurality of training attitude data sets to obtain the specified attitude model.
11. A camera opening device, the device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
when a camera opening instruction is detected, acquire current attitude data of a terminal, wherein the attitude data is used for indicating a current attitude of the terminal, and the attitude comprises at least one of a height, a landscape or portrait screen state, and a pitch state;
determine, through a specified attitude model and based on the current attitude data of the terminal, a shooting mode corresponding to the current attitude data of the terminal;
judge whether the shooting mode corresponding to the current attitude data of the terminal is a self-shooting mode;
when the shooting mode corresponding to the current attitude data of the terminal is the self-shooting mode, acquire an image through a front-facing camera, and determine a camera to be started based on the acquired image;
start the camera to be started;
wherein the determining a camera to be started based on the acquired image comprises:
judging whether the acquired image contains a face image;
when the acquired image contains a face image, recognizing a facial expression of the face image; when the recognized facial expression is a designated expression, determining that the camera to be started is the front-facing camera; and when the recognized facial expression is not the designated expression, determining that the camera to be started is a rear camera, wherein the designated expression is the facial expression that occurs most frequently in face images previously captured by the user with the front-facing camera.
CN201610930539.6A 2016-10-31 2016-10-31 Camera opening method and device Active CN106572299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610930539.6A CN106572299B (en) 2016-10-31 2016-10-31 Camera opening method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610930539.6A CN106572299B (en) 2016-10-31 2016-10-31 Camera opening method and device

Publications (2)

Publication Number Publication Date
CN106572299A CN106572299A (en) 2017-04-19
CN106572299B true CN106572299B (en) 2020-02-28

Family

ID=58533792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610930539.6A Active CN106572299B (en) 2016-10-31 2016-10-31 Camera opening method and device

Country Status (1)

Country Link
CN (1) CN106572299B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107168620A (en) * 2017-04-21 2017-09-15 北京小米移动软件有限公司 Method, device, terminal device and the computer-readable recording medium of control terminal
CN108737631B (en) * 2017-04-25 2021-08-03 北京小米移动软件有限公司 Method and device for rapidly acquiring image
CN107105160A (en) * 2017-04-25 2017-08-29 维沃移动通信有限公司 A kind of method, system and mobile terminal for starting camera
CN107179871A (en) * 2017-04-28 2017-09-19 广东欧珀移动通信有限公司 Fingerprint recognition region display methods and Related product
CN107333004A (en) * 2017-07-25 2017-11-07 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN107333067A (en) * 2017-08-04 2017-11-07 维沃移动通信有限公司 The control method and terminal of a kind of camera
CN107613193A (en) * 2017-08-24 2018-01-19 维沃移动通信有限公司 A kind of camera control method and mobile terminal
CN107517349A (en) * 2017-09-07 2017-12-26 深圳支点电子智能科技有限公司 Mobile terminal and Related product with camera function
CN107613197A (en) * 2017-09-07 2018-01-19 深圳支点电子智能科技有限公司 One kind control camera photographic method and mobile terminal
CN107741786A (en) * 2017-10-25 2018-02-27 深圳市金立通信设备有限公司 A kind of method, terminal and computer-readable recording medium for starting camera
CN108024063A (en) * 2017-12-15 2018-05-11 苏州燕云网络技术有限公司 The image pickup method and device of mobile terminal
CN108174086B (en) * 2017-12-25 2020-09-08 Oppo广东移动通信有限公司 Shooting method and related product
CN108156306B (en) * 2017-12-25 2020-06-02 Oppo广东移动通信有限公司 Unlocking method and related product
CN110035168A (en) * 2018-01-11 2019-07-19 中兴通讯股份有限公司 Starting method, terminal and the storage medium of front camera
CN108076294B (en) * 2018-01-26 2019-11-19 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN109246284B (en) * 2018-09-30 2023-07-21 联想(北京)有限公司 Face unlocking method and electronic equipment
CN111105792A (en) 2018-10-29 2020-05-05 华为技术有限公司 Voice interaction processing method and device
CN112689990A (en) * 2018-11-09 2021-04-20 深圳市柔宇科技股份有限公司 Photographing control method, electronic device, and computer-readable storage medium
CN111367403A (en) * 2018-12-29 2020-07-03 香港乐蜜有限公司 Interaction method and device
CN109788144B (en) * 2019-03-27 2021-04-20 维沃移动通信有限公司 Shooting method and terminal equipment
CN109922272A (en) * 2019-04-23 2019-06-21 广东小天才科技有限公司 Screening-mode switching method, device, wrist-watch and medium based on smartwatch
CN111953927B (en) * 2019-05-17 2022-06-24 成都鼎桥通信技术有限公司 Handheld terminal video return method and camera device
WO2020237545A1 (en) * 2019-05-29 2020-12-03 深圳市欢太科技有限公司 Camera starting method and related apparatus
CN110456938B (en) 2019-06-28 2021-01-29 华为技术有限公司 False touch prevention method for curved screen and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8797265B2 (en) * 2011-03-09 2014-08-05 Broadcom Corporation Gyroscope control and input/output device selection in handheld mobile devices
CN102413282B (en) * 2011-10-26 2015-02-18 惠州Tcl移动通信有限公司 Self-shooting guidance method and equipment
CN103618851A (en) * 2013-11-08 2014-03-05 杨睿琦 Self-photographing method and terminal
CN104935698B (en) * 2015-06-23 2019-02-22 上海卓易科技股份有限公司 A kind of image pickup method of intelligent terminal, filming apparatus and smart phone

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103079034A (en) * 2013-01-06 2013-05-01 北京百度网讯科技有限公司 Perception shooting method and system
CN103795864A (en) * 2014-01-29 2014-05-14 华为技术有限公司 Method for selecting front camera and back camera of mobile terminal and mobile terminal
CN105282432A (en) * 2014-07-21 2016-01-27 联想(新加坡)私人有限公司 Camera mode selection based on context
CN104536559A (en) * 2014-11-25 2015-04-22 深圳市金立通信设备有限公司 Terminal control method
CN105827955A (en) * 2016-03-14 2016-08-03 乐卡汽车智能科技(北京)有限公司 Camera switching method and device

Also Published As

Publication number Publication date
CN106572299A (en) 2017-04-19

Similar Documents

Publication Publication Date Title
CN106572299B (en) Camera opening method and device
CN106797416B (en) Screen control method and device
US9674395B2 (en) Methods and apparatuses for generating photograph
EP3032821B1 (en) Method and device for shooting a picture
CN105488527B (en) Image classification method and device
US11061202B2 (en) Methods and devices for adjusting lens position
CN107102772B (en) Touch control method and device
US9491371B2 (en) Method and device for configuring photographing parameters
EP3136699A1 (en) Method and device for connecting external equipment
US10318069B2 (en) Method for controlling state of touch screen, and electronic device and medium for implementing the same
CN112188074B (en) Image processing method and device, electronic equipment and readable storage medium
CN111984347A (en) Interaction processing method, device, equipment and storage medium
EP3211879A1 (en) Method and device for automatically capturing photograph, electronic device
CN105956513B (en) Method and device for executing reaction action
CN107239758B (en) Method and device for positioning key points of human face
US20200106936A1 (en) Full screen terminal, operation control method, and device based on full screen terminal
CN107948876B (en) Method, device and medium for controlling sound box equipment
CN113315904B (en) Shooting method, shooting device and storage medium
CN114339019B (en) Focusing method, focusing device and storage medium
CN107122356B (en) Method and device for displaying face value and electronic equipment
EP3905660A1 (en) Method and device for shooting image, and storage medium
US9723218B2 (en) Method and device for shooting a picture
CN108769513B (en) Camera photographing method and device
CN107329604B (en) Mobile terminal control method and device
CN111756985A (en) Image shooting method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant