CN113301251A - Auxiliary shooting method, mobile terminal and computer-readable storage medium - Google Patents

Auxiliary shooting method, mobile terminal and computer-readable storage medium

Info

Publication number
CN113301251A
CN113301251A
Authority
CN
China
Prior art keywords
shooting
preview interface
composition
face image
shooting preview
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110551439.3A
Other languages
Chinese (zh)
Other versions
CN113301251B (en)
Inventor
徐爱辉
余航
崔小辉
王汇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN202110551439.3A
Publication of CN113301251A
Application granted
Publication of CN113301251B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Abstract

The invention discloses an auxiliary shooting method, a mobile terminal and a computer-readable storage medium. The method comprises the following steps: entering a portrait shooting mode when a human body is present in the shooting preview interface; determining the distance between the terminal and the human body; when the distance is a medium distance, determining a composition type according to the position, in the shooting preview interface, of key points in the first frame of face image acquired in the portrait shooting mode; displaying a first identifier corresponding to the composition type in the shooting preview interface; displaying a second identifier in the shooting preview interface according to the position, in the shooting preview interface, of key points in each frame of face image acquired in the portrait shooting mode; and shooting when the degree of coincidence between the first identifier and the second identifier is greater than a preset degree of coincidence. The method assists the user in composing the picture during shooting and improves the shooting result.

Description

Auxiliary shooting method, mobile terminal and computer-readable storage medium
Technical Field
The present invention relates to the field of shooting technologies, and in particular, to an auxiliary shooting method, a mobile terminal, and a computer-readable storage medium.
Background
With the development of terminal devices, most terminal devices are equipped with a camera, and shooting with a terminal device has become a commonly used function.
In photography, composition is an important factor in presenting the content of a work: it is the process of selecting and arranging the elements of a scene to produce a harmonious photograph. However, ordinary users generally do not master composition techniques, so the final shooting results are often poor.
Disclosure of Invention
In order to solve the above technical problem, the present invention provides an auxiliary shooting method, a mobile terminal and a computer-readable storage medium.
In order to achieve the above object, the present invention provides an auxiliary shooting method, including:
entering a portrait shooting mode when a human body is present in the shooting preview interface;
determining the distance between the terminal and the human body;
when the distance is a medium distance, determining a composition type according to the position, in the shooting preview interface, of key points in the first frame of face image acquired in the portrait shooting mode;
displaying a first identifier corresponding to the composition type in the shooting preview interface;
displaying a second identifier in the shooting preview interface according to the position, in the shooting preview interface, of key points in each frame of face image acquired in the portrait shooting mode;
and shooting when the degree of coincidence between the first identifier and the second identifier is greater than a preset degree of coincidence.
Optionally, the step of determining the composition type according to the position of the key points in the first frame of face image acquired in the portrait shooting mode in the shooting preview interface includes:
acquiring, from the first frame of face image acquired in the portrait shooting mode, the coordinate X_L of the left eye on the X axis of the shooting preview interface, the coordinate X_R of the right eye on the X axis, and the coordinate X_N of the nose on the X axis, wherein the X axis is parallel to the bottom edge of the shooting preview interface and points to the right;
when X_N < X_L + L_1(X_R - X_L), determining the composition type to be the left-third composition, wherein L_1 is a first preset value.
Optionally, after the step of acquiring the coordinates X_L, X_R and X_N, the method further includes:
when X_N ≥ X_L + L_1(X_R - X_L) and X_N ≤ X_L + L_2(X_R - X_L), determining the composition type to be the center composition, wherein L_1 is the first preset value and L_2 is a second preset value.
Optionally, after the step of acquiring the coordinates X_L, X_R and X_N, the method further includes:
when X_N > X_L + L_2(X_R - X_L), determining the composition type to be the right-third composition, wherein L_2 is the second preset value.
Optionally, after the step of determining the composition type according to the position of the key points in the first frame of face image acquired in the portrait shooting mode in the shooting preview interface, the method further includes:
when P frames of face images are continuously acquired in the portrait shooting mode, the new composition types determined according to the positions of the key points in each of the P frames in the shooting preview interface are all the same, and that new composition type differs from the current composition type, replacing the composition type with the new composition type and displaying the first identifier corresponding to the new composition type in the shooting preview interface.
Optionally, the auxiliary shooting method further includes:
after the first frame of face image is acquired in the portrait shooting mode, whenever a new face image is acquired in the portrait shooting mode, acquiring the coordinate X_L of the left eye in the new face image on the X axis of the shooting preview interface, the coordinate X_R of the right eye on the X axis, and the coordinate X_N of the nose on the X axis;
when the composition type is the left-third composition: when X_N < X_L + L_3(X_R - X_L), determining the new composition type to be the left-third composition; when X_N ≥ X_L + L_3(X_R - X_L) and X_N ≤ X_L + L_4(X_R - X_L), determining the new composition type to be the center composition; and when X_N > X_L + L_4(X_R - X_L), determining the new composition type to be the right-third composition, wherein L_3 is a third preset value and L_4 is a fourth preset value;
when the composition type is the center composition: when X_N < X_L + L_5(X_R - X_L), determining the new composition type to be the left-third composition; when X_N ≥ X_L + L_5(X_R - X_L) and X_N ≤ X_L + L_6(X_R - X_L), determining the new composition type to be the center composition; and when X_N > X_L + L_6(X_R - X_L), determining the new composition type to be the right-third composition, wherein L_5 is a fifth preset value and L_6 is a sixth preset value;
when the composition type is the right-third composition: when X_N < X_L + L_7(X_R - X_L), determining the new composition type to be the left-third composition; when X_N ≥ X_L + L_7(X_R - X_L) and X_N ≤ X_L + L_8(X_R - X_L), determining the new composition type to be the center composition; and when X_N > X_L + L_8(X_R - X_L), determining the new composition type to be the right-third composition, wherein L_7 is a seventh preset value and L_8 is an eighth preset value.
Optionally, the step of displaying the second identifier in the shooting preview interface according to the position of the key points in each frame of face image acquired in the portrait shooting mode in the shooting preview interface includes:
acquiring, from each frame of face image acquired in the portrait shooting mode, the coordinates (X_L, Y_L) of the left eye and the coordinates (X_R, Y_R) of the right eye in the shooting preview interface, wherein the X axis is parallel to the bottom edge of the shooting preview interface and points to the right, and the Y axis is perpendicular to the X axis and points upward;
obtaining the X coordinate corresponding to each frame of face image through an X coordinate determination formula and the Y coordinate through a Y coordinate determination formula, and, when any frame of face image acquired in the portrait shooting mode is displayed on the shooting preview interface, displaying the second identifier at the position (X, Y) corresponding to that frame, wherein the X coordinate determination formula is:
[formula published as an image (Figure BDA0003075593750000031); not recoverable from this text]
and the Y coordinate determination formula is: Y = Y_R + C_1(Y_R - Y_L) + C_2·H, wherein C_1 is a ninth preset value, C_2 is a tenth preset value, and H is the height of the shooting preview interface.
Optionally, after the step of determining the distance between the terminal and the human body, the method further includes:
when the distance is a short distance, determining the composition type as a center composition.
In addition, to achieve the above object, the present invention also provides a mobile terminal, including: a memory, a processor, and an auxiliary shooting program stored on the memory and executable on the processor, wherein the auxiliary shooting program, when executed by the processor, implements the steps of the auxiliary shooting method described above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon an auxiliary photographing program which, when executed by a processor, implements the steps of the auxiliary photographing method as described above.
In the present invention, a portrait shooting mode is entered when a human body is present in the shooting preview interface; the distance between the terminal and the human body is determined; when the distance is a medium distance, a composition type is determined according to the position, in the shooting preview interface, of key points in the first frame of face image acquired in the portrait shooting mode; a first identifier corresponding to the composition type is displayed in the shooting preview interface; a second identifier is displayed in the shooting preview interface according to the position, in the shooting preview interface, of key points in each frame of face image acquired in the portrait shooting mode; and shooting is performed when the degree of coincidence between the first identifier and the second identifier is greater than a preset degree of coincidence. In this way, the invention assists the user in composing the picture during shooting and improves the shooting result.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention;
fig. 2 is a diagram of a communication network system architecture according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an auxiliary photographing method according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating that a first frame of face image is displayed on a shooting preview interface in an embodiment;
fig. 5 is a schematic diagram illustrating that a certain frame of face image is displayed on a shooting preview interface in an embodiment.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning by themselves. Thus, "module", "component" and "unit" may be used interchangeably.
In the embodiment of the invention, the auxiliary shooting method is applied to the mobile terminal, and the terminal can be implemented in various forms. For example, the mobile terminal related to the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palm top computer, a Personal Digital Assistant (PDA), and the like.
While a tablet computer will be described as an example in the following description, those skilled in the art will appreciate that the configuration according to the embodiments of the present invention can also be applied to other types of mobile terminals, except for elements used specifically for mobile purposes.
Referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, where the terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), and TDD-LTE (Time Division duplex-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user receive and send e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals. The a/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer, tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured at the terminal, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations performed by the user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. The touch panel 1071 may include a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 110, and it can also receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is the control center of the mobile terminal; it connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to the various components; preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption through the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
As shown in fig. 1, the memory 109, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an auxiliary photographing program, and the processor 110 may be configured to call the auxiliary photographing program stored in the memory 109 and perform the steps of various embodiments of the method of the present invention.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of the universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers such as the home location register (not shown) for management functions, and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for the policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
In an embodiment, referring to fig. 3, fig. 3 is a flowchart illustrating an auxiliary photographing method according to an embodiment of the present invention. As shown in fig. 3, the auxiliary photographing method includes:
step S10, when there is human body in the shooting preview interface, entering into a human image shooting mode;
in this embodiment, after the camera is started, it collects images and displays them in the shooting preview interface; when a human body is present in the displayed image, the portrait shooting mode is entered. Whether an image contains a human body can be judged using deep learning, which is prior art and is not described again here.
Step S20, determining the distance between the terminal and the human body;
in this embodiment, after entering the portrait shooting mode, the distance between the terminal and the human body is determined. The distance can be measured, for example, by an infrared distance measuring sensor on the terminal.
Step S30, when the distance is a medium distance, determining a composition type according to the position, in the shooting preview interface, of key points in the first frame of face image acquired in the portrait shooting mode;
in this embodiment, the distance ranges may be divided according to actual needs. For example, a short distance is less than 2 meters, a medium distance is from 2 meters to 5 meters, and a long distance is more than 5 meters. When the distance between the terminal and the human body is a medium distance, the composition type is determined according to the position of key points in the first frame of face image acquired in the portrait shooting mode in the shooting preview interface. The key points include the left eye, the right eye, the nose, the mouth, and so on.
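For illustration only, the distance bucketing described above can be written as a small helper. This is a minimal sketch assuming the example thresholds of 2 meters and 5 meters given in this paragraph; the function name and return labels are hypothetical, not from the original disclosure.

```python
def classify_distance(distance_m: float) -> str:
    """Bucket the terminal-to-subject distance using the example
    thresholds from this embodiment (2 m and 5 m)."""
    if distance_m < 2.0:
        return "short"    # composition type is forced to the center composition
    if distance_m <= 5.0:
        return "medium"   # composition type is derived from face key points
    return "long"
```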
Further, in an embodiment, the step of determining the composition type according to the position of the key points in the first frame of face image acquired in the portrait shooting mode in the shooting preview interface includes:
Step S301, acquiring, from the first frame of face image acquired in the portrait shooting mode, the coordinate X_L of the left eye on the X axis of the shooting preview interface, the coordinate X_R of the right eye on the X axis, and the coordinate X_N of the nose on the X axis, wherein the X axis is parallel to the bottom edge of the shooting preview interface and points to the right;
in this embodiment, referring to fig. 4, fig. 4 is a schematic diagram of a first frame of face image displayed on the shooting preview interface in an embodiment. As shown in fig. 4, the X axis is parallel to the bottom edge of the shooting preview interface and points to the right, and the origin of the coordinate system is the lower-left vertex of the shooting preview interface. It should be noted that the coordinate system need not be displayed in the shooting preview interface, and the origin may also be another point, such as the upper-left vertex or the center point of the shooting preview interface. Based on the coordinate system shown in fig. 4, the coordinate X_L of the left eye, the coordinate X_R of the right eye and the coordinate X_N of the nose on the X axis of the shooting preview interface can be obtained from the first frame of face image.
Step S302, when X_N < X_L + L_1(X_R - X_L), determining the composition type to be the left-third composition, wherein L_1 is a first preset value.
In this embodiment, when X_N < X_L + L_1(X_R - X_L), the composition type is determined to be the left-third composition, where the first preset value L_1 may be, for example, 0.4.
Further, in an embodiment, after step S301, the method further includes:
when X_N ≥ X_L + L_1(X_R - X_L) and X_N ≤ X_L + L_2(X_R - X_L), determining the composition type to be the center composition, wherein L_1 is the first preset value and L_2 is a second preset value.
In this embodiment, when X_N ≥ X_L + L_1(X_R - X_L) and X_N ≤ X_L + L_2(X_R - X_L), the composition type is determined to be the center composition, where L_1 may be, for example, 0.4 and the second preset value L_2 may be, for example, 0.6.
Further, in an embodiment, after step S301, the method further includes:
when X_N > X_L + L_2(X_R - X_L), determining the composition type to be the right-third composition, wherein L_2 is the second preset value.
In this embodiment, when X_N > X_L + L_2(X_R - X_L), the composition type is determined to be the right-third composition, where the second preset value L_2 may be, for example, 0.6.
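Taken together, the three conditions above amount to comparing the nose's horizontal position, expressed as a fraction of the eye-to-eye span, against the two preset values. The sketch below is one non-limiting reading of those conditions with the example values L_1 = 0.4 and L_2 = 0.6; all names are illustrative.

```python
def classify_composition(x_left_eye: float, x_right_eye: float,
                         x_nose: float, l1: float = 0.4, l2: float = 0.6) -> str:
    """Determine the composition type from the X coordinates of the left
    eye, right eye and nose in the shooting preview interface."""
    span = x_right_eye - x_left_eye
    if x_nose < x_left_eye + l1 * span:
        return "left_third"    # X_N < X_L + L1*(X_R - X_L)
    if x_nose <= x_left_eye + l2 * span:
        return "center"        # between the L1 and L2 thresholds
    return "right_third"       # X_N > X_L + L2*(X_R - X_L)
```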
Step S40, displaying a first identifier corresponding to the composition type in the shooting preview interface;
in this embodiment, the shooting preview interface is divided into thirds by two vertical lines. When the composition type is the left-third composition, the first identifier is displayed at the center point of the right vertical line; when the composition type is the right-third composition, the first identifier is displayed at the center point of the left vertical line; and when the composition type is the center composition, the first identifier is displayed at the center of the shooting preview interface. The first identifier may be a circle, a triangle, or another shape, and may also be colored; its shape and color are set according to actual needs and are not limited here.
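The placement rule above can be sketched as follows, assuming the two vertical lines sit at one third and two thirds of the interface width W and span the full height H; the function name and the returned coordinate convention are illustrative.

```python
def first_identifier_position(composition: str, w: float, h: float) -> tuple:
    """Return the (x, y) point at which to draw the first identifier."""
    if composition == "left_third":
        return (2 * w / 3, h / 2)   # center point of the right vertical line
    if composition == "right_third":
        return (w / 3, h / 2)       # center point of the left vertical line
    return (w / 2, h / 2)           # center composition: center of the interface
```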
Step S50, displaying a second identifier in the shooting preview interface according to the position, in the shooting preview interface, of the key points in each frame of face image acquired in the portrait shooting mode;
in this embodiment, the position corresponding to each frame of face image can be determined according to the position of the key points in that frame in the shooting preview interface, so that when each frame of face image is displayed in the shooting preview interface, the second identifier is displayed at the corresponding position. It is easy to understand that when the face moves relative to the camera, the position of the key points in each acquired frame changes in the shooting preview interface, so the position of the second identifier in the shooting preview interface changes accordingly. The second identifier may have the same size and shape as the first identifier but a different color.
Further, in one embodiment, step S50 includes:
Step S501, acquiring, from each frame of face image acquired in the portrait shooting mode, the coordinates (X_L, Y_L) of the left eye and the coordinates (X_R, Y_R) of the right eye in the shooting preview interface, wherein the X axis is parallel to the bottom edge of the shooting preview interface and points to the right, and the Y axis is perpendicular to the X axis and points upward;
in this embodiment, the process of acquiring the coordinates (X_L, Y_L) of the left eye and the coordinates (X_R, Y_R) of the right eye in each frame of face image is similar to the embodiment of step S301 and is not repeated here.
Step S502, obtaining the X coordinate corresponding to each frame of face image through an X coordinate determination formula and the Y coordinate through a Y coordinate determination formula, and, when any frame of face image acquired in the portrait shooting mode is displayed on the shooting preview interface, displaying the second identifier at the position (X, Y) corresponding to that frame, wherein the X coordinate determination formula is:
[formula published as an image (Figure BDA0003075593750000111); not recoverable from this text]
and the Y coordinate determination formula is: Y = Y_R + C_1(Y_R - Y_L) + C_2·H, wherein C_1 is a ninth preset value, C_2 is a tenth preset value, and H is the height of the shooting preview interface.
In this embodiment, for a frame of face image, after the coordinates (X_L, Y_L) of the left eye and (X_R, Y_R) of the right eye in the shooting preview interface are obtained, the X coordinate determination formula and the Y coordinate determination formula Y = Y_R + C_1(Y_R - Y_L) + C_2·H give the position (X, Y) corresponding to that frame on the shooting preview interface, so that when the frame is displayed, the second identifier is displayed at (X, Y). For example, C_1 may be 0.25 and C_2 may be 0.3; these values are merely illustrative and do not limit C_1 or C_2.
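A sketch of step S502 follows. The Y formula is taken from the text; the X coordinate determination formula was published only as an image and is not recoverable here, so the midpoint of the two eyes is used as a clearly labeled stand-in. All names are illustrative.

```python
def second_identifier_position(left_eye: tuple, right_eye: tuple,
                               preview_height: float,
                               c1: float = 0.25, c2: float = 0.3) -> tuple:
    """Compute the (X, Y) position of the second identifier for one frame."""
    x_l, y_l = left_eye
    x_r, y_r = right_eye
    # ASSUMPTION: the X formula is image-only in the source; the eye
    # midpoint is used here purely as a placeholder.
    x = (x_l + x_r) / 2.0
    # Y coordinate determination formula from the text:
    # Y = Y_R + C1*(Y_R - Y_L) + C2*H
    y = y_r + c1 * (y_r - y_l) + c2 * preview_height
    return (x, y)
```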
Referring to fig. 5, fig. 5 is a schematic diagram of a certain frame of face image displayed on the shooting preview interface in an embodiment. Fig. 5 shows an example of a center composition: the first identifier is displayed at the center point of the shooting preview interface, and the second identifier is displayed at the position (X, Y) corresponding to the frame of face image. To help the user distinguish them more clearly, the first identifier and the second identifier can be rendered in different colors.
Step S60, shooting when the degree of coincidence between the first identifier and the second identifier is greater than a preset degree of coincidence.
In this embodiment, through relative movement between the face and the camera, the first identifier and the second identifier can gradually be brought into coincidence. When the degree of coincidence between the first identifier and the second identifier is greater than the preset degree of coincidence, the composition is considered complete, and the picture is then shot directly, or the user is prompted to trigger the shooting function, so that a photograph composed according to the determined composition type is obtained. The preset degree of coincidence is set according to actual needs, for example, 80%.
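The publication does not spell out how the degree of coincidence is computed, so the check below is only an assumption: it treats the two identifiers as equally sized circles and derives a coincidence value from the distance between their centers, triggering at the 80% example threshold.

```python
import math

def should_shoot(first_pos: tuple, second_pos: tuple,
                 identifier_radius: float, threshold: float = 0.8) -> bool:
    """Trigger shooting once the assumed coincidence measure exceeds
    the preset degree of coincidence (e.g., 80%)."""
    d = math.dist(first_pos, second_pos)
    coincidence = max(0.0, 1.0 - d / (2 * identifier_radius))  # 1.0 when centers coincide
    return coincidence > threshold
```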
In this embodiment, a portrait shooting mode is entered when a human body is present in the shooting preview interface; the distance between the terminal and the human body is determined; when the distance is a medium distance, a composition type is determined according to the position, in the shooting preview interface, of key points in the first frame of face image acquired in the portrait shooting mode; a first identifier corresponding to the composition type is displayed in the shooting preview interface; a second identifier is displayed in the shooting preview interface according to the position, in the shooting preview interface, of key points in each frame of face image acquired in the portrait shooting mode; and shooting is performed when the degree of coincidence between the first identifier and the second identifier is greater than a preset degree of coincidence. Through this embodiment, the user is assisted in composing the picture during shooting, and the shooting result is improved.
Further, in an embodiment, after step S30, the method further includes:
when P frames of face images are continuously acquired in the portrait shooting mode, the new composition types determined according to the positions of the key points in each of the P frames in the shooting preview interface are all the same, and that new composition type differs from the current composition type, replacing the composition type with the new composition type and displaying the first identifier corresponding to the new composition type in the shooting preview interface.
In this embodiment, after step S30, new face images continue to be acquired in the portrait shooting mode, and a new composition type is determined according to the position of the key points in each new face image in the shooting preview interface. When P consecutive frames all yield the same new composition type and that type differs from the current composition type, the composition type is replaced with the new one, and the first identifier corresponding to the new composition type is displayed in the shooting preview interface. For example, if the composition type determined from the first frame was the left-third composition, the first identifier was originally displayed at the center point of the right vertical line; if the new composition type is the center composition, the first identifier is now displayed at the center point of the shooting preview interface. P is set according to actual needs, for example, 50.
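The switching rule above behaves like a debounce: the displayed composition type changes only after P consecutive frames agree on a different type. A minimal sketch assuming the example value P = 50; the class name and structure are illustrative.

```python
class CompositionDebouncer:
    """Replace the composition type only after P consecutive frames
    yield the same new type (P = 50 in the example above)."""

    def __init__(self, initial_type: str, p: int = 50):
        self.current = initial_type
        self.p = p
        self._candidate = None
        self._count = 0

    def update(self, new_type: str) -> str:
        if new_type == self.current:
            self._candidate, self._count = None, 0   # no change pending
        elif new_type == self._candidate:
            self._count += 1
            if self._count >= self.p:                # P frames in a row agree
                self.current = new_type
                self._candidate, self._count = None, 0
        else:
            self._candidate, self._count = new_type, 1
        return self.current
```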
Further, in an embodiment, the auxiliary shooting method further includes:
after the first frame of face image is acquired in the portrait shooting mode, whenever a new face image is acquired in the portrait shooting mode, acquiring the coordinate X_L of the left eye in the new face image on the X axis of the shooting preview interface, the coordinate X_R of the right eye on the X axis, and the coordinate X_N of the nose on the X axis;
In this embodiment, after the first frame of face image is acquired in the portrait shooting mode, new face images continue to be acquired in the portrait shooting mode. When a new face image is acquired, the coordinates X_L, X_R and X_N are obtained in the same way as in step S301 and are not described again here.
When the composition type is the left-third composition: when X_N < X_L + L_3(X_R - X_L), the new composition type is determined to be the left-third composition; when X_N ≥ X_L + L_3(X_R - X_L) and X_N ≤ X_L + L_4(X_R - X_L), the new composition type is determined to be the center composition; and when X_N > X_L + L_4(X_R - X_L), the new composition type is determined to be the right-third composition, wherein L_3 is a third preset value and L_4 is a fourth preset value.
When the composition type is the center composition: when X_N < X_L + L_5(X_R - X_L), the new composition type is determined to be the left-third composition; when X_N ≥ X_L + L_5(X_R - X_L) and X_N ≤ X_L + L_6(X_R - X_L), the new composition type is determined to be the center composition; and when X_N > X_L + L_6(X_R - X_L), the new composition type is determined to be the right-third composition, wherein L_5 is a fifth preset value and L_6 is a sixth preset value.
When the composition type is the right-third composition: when X_N < X_L + L_7(X_R - X_L), the new composition type is determined to be the left-third composition; when X_N ≥ X_L + L_7(X_R - X_L) and X_N ≤ X_L + L_8(X_R - X_L), the new composition type is determined to be the center composition; and when X_N > X_L + L_8(X_R - X_L), the new composition type is determined to be the right-third composition, wherein L_7 is a seventh preset value and L_8 is an eighth preset value.
In this embodiment, the thresholds depend on the current composition type. When the composition type determined from the first frame is the left-third composition, L_3 may be, for example, 0.45 and L_4 may be 0.65; when it is the center composition, L_5 may be 0.3 and L_6 may be 0.7; and when it is the right-third composition, L_7 may be 0.3 and L_8 may be 0.55.
It should be noted that these preset values are merely exemplary and are not limiting.
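Read together, the per-type threshold pairs (L_3, L_4), (L_5, L_6) and (L_7, L_8) act as hysteresis: with the example values, the band that keeps the current composition type is wider than the band that first selected it, so the first identifier does not flicker between composition types when the nose sits near a boundary. The sketch below assumes the example values and reuses the illustrative naming of the earlier sketches.

```python
# Per-type (lower, upper) fractions of the eye span; example values only.
HYSTERESIS_THRESHOLDS = {
    "left_third":  (0.45, 0.65),   # L3, L4
    "center":      (0.30, 0.70),   # L5, L6
    "right_third": (0.30, 0.55),   # L7, L8
}

def reclassify_composition(current: str, x_left_eye: float,
                           x_right_eye: float, x_nose: float) -> str:
    """Determine the new composition type for a fresh frame, with
    thresholds that depend on the currently displayed type."""
    lo, hi = HYSTERESIS_THRESHOLDS[current]
    span = x_right_eye - x_left_eye
    if x_nose < x_left_eye + lo * span:
        return "left_third"
    if x_nose <= x_left_eye + hi * span:
        return "center"
    return "right_third"
```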
Further, in an embodiment, after step S20, the method further includes:
when the distance is a short distance, determining the composition type as a center composition.
In this embodiment, when the distance between the terminal and the human body is a short distance, the composition type is determined to be the center composition.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where an auxiliary shooting program is stored, and the auxiliary shooting program, when executed by a processor, implements the steps of the above-described embodiments of the auxiliary shooting method.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An auxiliary shooting method, characterized in that the auxiliary shooting method comprises:
entering a portrait shooting mode when a human body is present in the shooting preview interface;
determining the distance between the terminal and the human body;
when the distance is a medium distance, determining a composition type according to the position, in the shooting preview interface, of key points in the first frame of face image acquired in the portrait shooting mode;
displaying a first identifier corresponding to the composition type in the shooting preview interface;
displaying a second identifier in the shooting preview interface according to the position, in the shooting preview interface, of key points in each frame of face image acquired in the portrait shooting mode;
and shooting when the degree of coincidence between the first identifier and the second identifier is greater than a preset degree of coincidence.
2. The auxiliary shooting method according to claim 1, wherein the step of determining the composition type according to the position of the key points in the first frame of face image acquired in the portrait shooting mode in the shooting preview interface comprises:
acquiring, from the first frame of face image acquired in the portrait shooting mode, the coordinate X_L of the left eye on the X axis of the shooting preview interface, the coordinate X_R of the right eye on the X axis, and the coordinate X_N of the nose on the X axis, wherein the X axis is parallel to the bottom edge of the shooting preview interface and points to the right;
when X_N < X_L + L_1(X_R - X_L), determining the composition type to be the left-third composition, wherein L_1 is a first preset value.
3. The auxiliary shooting method according to claim 2, wherein after the step of acquiring the coordinates X_L, X_R and X_N, the method further comprises:
when X_N ≥ X_L + L_1(X_R - X_L) and X_N ≤ X_L + L_2(X_R - X_L), determining the composition type to be the center composition, wherein L_1 is the first preset value and L_2 is a second preset value.
4. The auxiliary shooting method according to claim 3, wherein after the step of acquiring the coordinates X_L, X_R and X_N, the method further comprises:
when X_N > X_L + L_2(X_R - X_L), determining the composition type to be the right-third composition, wherein L_2 is the second preset value.
5. The auxiliary shooting method according to claim 4, wherein after the step of determining the composition type according to the position of the key points in the first frame of face image acquired in the portrait shooting mode in the shooting preview interface, the method further comprises:
when P frames of face images are continuously acquired in the portrait shooting mode, the new composition types determined according to the positions of the key points in each of the P frames in the shooting preview interface are all the same, and that new composition type differs from the current composition type, replacing the composition type with the new composition type and displaying the first identifier corresponding to the new composition type in the shooting preview interface.
6. The auxiliary shooting method according to claim 5, wherein the auxiliary shooting method further comprises:
after the first frame of face image is acquired in the portrait shooting mode, whenever a new face image is acquired in the portrait shooting mode, acquiring the coordinate X_L of the left eye in the new face image on the X axis of the shooting preview interface, the coordinate X_R of the right eye on the X axis, and the coordinate X_N of the nose on the X axis;
when the composition type is the left-third composition: when X_N < X_L + L_3(X_R - X_L), determining the new composition type to be the left-third composition; when X_N ≥ X_L + L_3(X_R - X_L) and X_N ≤ X_L + L_4(X_R - X_L), determining the new composition type to be the center composition; and when X_N > X_L + L_4(X_R - X_L), determining the new composition type to be the right-third composition, wherein L_3 is a third preset value and L_4 is a fourth preset value;
when the composition type is the center composition: when X_N < X_L + L_5(X_R - X_L), determining the new composition type to be the left-third composition; when X_N ≥ X_L + L_5(X_R - X_L) and X_N ≤ X_L + L_6(X_R - X_L), determining the new composition type to be the center composition; and when X_N > X_L + L_6(X_R - X_L), determining the new composition type to be the right-third composition, wherein L_5 is a fifth preset value and L_6 is a sixth preset value;
when the composition type is the right-third composition: when X_N < X_L + L_7(X_R - X_L), determining the new composition type to be the left-third composition; when X_N ≥ X_L + L_7(X_R - X_L) and X_N ≤ X_L + L_8(X_R - X_L), determining the new composition type to be the center composition; and when X_N > X_L + L_8(X_R - X_L), determining the new composition type to be the right-third composition, wherein L_7 is a seventh preset value and L_8 is an eighth preset value.
7. The auxiliary shooting method according to claim 1, wherein the step of displaying the second identifier in the shooting preview interface according to the position of the key points in each frame of face image acquired in the portrait shooting mode in the shooting preview interface comprises:
acquiring, from each frame of face image acquired in the portrait shooting mode, the coordinates (X_L, Y_L) of the left eye and the coordinates (X_R, Y_R) of the right eye in the shooting preview interface, wherein the X axis is parallel to the bottom edge of the shooting preview interface and points to the right, and the Y axis is perpendicular to the X axis and points upward;
obtaining the X coordinate corresponding to each frame of face image through an X coordinate determination formula and the Y coordinate through a Y coordinate determination formula, and, when any frame of face image acquired in the portrait shooting mode is displayed on the shooting preview interface, displaying the second identifier at the position (X, Y) corresponding to that frame, wherein the X coordinate determination formula is:
[formula published as an image (Figure FDA0003075593740000031); not recoverable from this text]
and the Y coordinate determination formula is: Y = Y_R + C_1(Y_R - Y_L) + C_2·H, wherein C_1 is a ninth preset value, C_2 is a tenth preset value, and H is the height of the shooting preview interface.
8. The auxiliary shooting method according to any one of claims 1 to 7, further comprising, after the step of determining the distance between the terminal and the human body:
when the distance is a short distance, determining the composition type as a center composition.
9. A mobile terminal, characterized in that the mobile terminal comprises: a memory, a processor, and an auxiliary shooting program stored in the memory and executable on the processor, wherein the auxiliary shooting program, when executed by the processor, implements the steps of the auxiliary shooting method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that an auxiliary shooting program is stored on the computer-readable storage medium, and the auxiliary shooting program, when executed by a processor, implements the steps of the auxiliary shooting method according to any one of claims 1 to 8.
CN202110551439.3A 2021-05-20 2021-05-20 Auxiliary shooting method, mobile terminal and computer readable storage medium Active CN113301251B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110551439.3A CN113301251B (en) 2021-05-20 2021-05-20 Auxiliary shooting method, mobile terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113301251A (en) 2021-08-24
CN113301251B (en) 2023-10-20

Family

ID=77323040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110551439.3A Active CN113301251B (en) 2021-05-20 2021-05-20 Auxiliary shooting method, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113301251B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009212802A (en) * 2008-03-04 2009-09-17 Fujifilm Corp Imaging apparatus with composition assisting function, and composition assisting method of the same imaging apparatus
CN105046246A (en) * 2015-08-31 2015-11-11 广州市幸福网络技术有限公司 Identification photo camera capable of performing human image posture photography prompting and human image posture detection method
CN106605403A (en) * 2014-08-29 2017-04-26 三星电子株式会社 Photographing method and electronic device
CN108848313A (en) * 2018-08-10 2018-11-20 维沃移动通信有限公司 A kind of more people's photographic methods, terminal and storage medium
CN111343382A (en) * 2020-03-09 2020-06-26 Oppo广东移动通信有限公司 Photographing method and device, electronic equipment and storage medium
CN111770277A (en) * 2020-07-31 2020-10-13 RealMe重庆移动通信有限公司 Auxiliary shooting method, terminal and storage medium
CN112712470A (en) * 2019-10-25 2021-04-27 华为技术有限公司 Image enhancement method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278060A (en) * 2022-07-01 2022-11-01 北京五八信息技术有限公司 Data processing method and device, electronic equipment and storage medium
CN115278060B (en) * 2022-07-01 2024-04-09 北京五八信息技术有限公司 Data processing method and device, electronic equipment and storage medium
CN117135441A (en) * 2023-02-23 2023-11-28 荣耀终端有限公司 Image snapshot method and electronic equipment

Also Published As

Publication number Publication date
CN113301251B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN108989563B (en) Double-sided screen display method, mobile terminal and computer readable storage medium
CN109471579B (en) Terminal screen information layout adjustment method and device, mobile terminal and storage medium
CN109144441B (en) Screen adjusting method, terminal and computer readable storage medium
CN113301251B (en) Auxiliary shooting method, mobile terminal and computer readable storage medium
CN109739414B (en) Picture processing method, mobile terminal and computer readable storage medium
CN109491577B (en) Holding interaction method and device and computer readable storage medium
CN108762709B (en) Terminal control method, terminal and computer readable storage medium
CN108848321B (en) Exposure optimization method, device and computer-readable storage medium
CN109146463B (en) Mobile payment method, mobile terminal and computer readable storage medium
CN108279822B (en) Display method of camera application in flexible screen and mobile terminal
CN107340958B (en) Horizontal and vertical screen switching method and mobile terminal
CN112866685A (en) Screen projection delay measuring method, mobile terminal and computer readable storage medium
CN109561221B (en) Call control method, device and computer readable storage medium
CN107728789B (en) Starting method of one-hand operation mode, terminal and storage medium
CN107566607B (en) Notification display method, mobile terminal and computer readable storage medium
CN107256108B (en) Screen splitting method, device and computer readable storage medium
CN112947831B (en) Screen-throwing reverse control method, mobile terminal and computer readable storage medium
CN111866388B (en) Multiple exposure shooting method, equipment and computer readable storage medium
CN109413272B (en) Gravity sensor management method, double-sided screen mobile terminal and storage medium
CN109215004B (en) Image synthesis method, mobile terminal and computer readable storage medium
CN108600629B (en) Photographing method, mobile terminal and computer-readable storage medium
CN112135047A (en) Image processing method, mobile terminal and computer storage medium
CN110058761B (en) Copy selection method, mobile terminal and computer readable storage medium
CN113171614A (en) Auxiliary control method and device for game carrier and computer readable storage medium
CN109375789B (en) Gravity sensor multiplexing method, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant