CN111107271B - Shooting method and electronic equipment - Google Patents

Shooting method and electronic equipment

Info

Publication number
CN111107271B
CN111107271B
Authority
CN
China
Prior art keywords
electronic device
camera
preview image
input
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911422792.0A
Other languages
Chinese (zh)
Other versions
CN111107271A (en)
Inventor
贾添添
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911422792.0A
Publication of CN111107271A
Application granted
Publication of CN111107271B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Abstract

The embodiment of the invention provides a shooting method and an electronic device, relates to the field of communication technology, and can solve the problems that the process of viewing or shooting an image is cumbersome for the user and that the human-computer interaction performance is poor. The method comprises the following steps: receiving a first input of a user through a second camera of the electronic device while a preview image captured by a first camera of the electronic device is displayed on a shooting preview interface; and, in response to the first input, updating or saving the preview image. The scheme is applied to shooting scenes.

Description

Shooting method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a shooting method and electronic equipment.
Background
Currently, when a user triggers an electronic device to capture a picture, as long as the user holds the electronic device still, the shooting preview interface of the camera application keeps displaying an image of a certain real-scene area. In this case, if the user needs to view or shoot an image of the real scene around that area (hereinafter referred to as another real scene), the user has to move the electronic device to trigger it to re-capture and display an image of the other real scene, and then trigger the electronic device to shoot again, so that an image of the other real scene can be obtained.
However, in the above process, because the user needs to move the electronic device to view or shoot images of other real scenes, the process of viewing or shooting an image is cumbersome and complicated, and the human-computer interaction performance is poor.
Disclosure of Invention
The embodiment of the invention provides a shooting method and an electronic device, and aims to solve the problems that the process of viewing or shooting an image by a user is cumbersome and complicated, and the human-computer interaction performance is poor.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present invention provides a shooting method, where the method is applied to an electronic device, and the method includes: receiving a first input of a user through a second camera of the electronic equipment under the condition that a preview image acquired by a first camera of the electronic equipment is displayed on a shooting preview interface; and updating the preview image or saving the preview image in response to the first input.
In a second aspect, an embodiment of the present invention provides an electronic device, which may include a receiving module and a processing module. The receiving module is used for receiving a first input of a user through a second camera of the electronic equipment under the condition that a preview image acquired by the first camera of the electronic equipment is displayed on a shooting preview interface; and the processing module is used for responding to the first input received by the receiving module and updating the preview image or saving the preview image.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the shooting method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the shooting method as in the first aspect described above.
In the embodiment of the present invention, in a case that a preview image captured by a first camera of an electronic device is displayed on a shooting preview interface of the electronic device, if the electronic device detects (receives) a first input of a user through a second camera of the electronic device, the electronic device may update the preview image or save the preview image in response to the first input. With this scheme, the user can perform an input within the acquisition range of the second camera to trigger the electronic device to update or save the preview image in the shooting preview interface, without having to move the electronic device to do so; therefore, the convenience of viewing or shooting an image can be improved, and the human-computer interaction performance is improved.
Drawings
Fig. 1 is a schematic structural diagram of an android operating system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a shooting method according to an embodiment of the present invention;
fig. 3 is a first schematic interface diagram of an application of a shooting method according to an embodiment of the present invention;
fig. 4 is a second schematic interface diagram of an application of a photographing method according to an embodiment of the present invention;
fig. 5 is a third schematic interface diagram of an application of a shooting method according to an embodiment of the present invention;
fig. 6 is a second schematic diagram of a photographing method according to an embodiment of the invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 8 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein is an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, a/B denotes a or B.
The terms "first" and "second," etc. herein are used to distinguish between different objects and are not used to describe a particular order of objects. For example, the first input and the second input, etc. are for distinguishing different inputs, rather than for describing a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, "a plurality" means two or more unless otherwise specified. For example, a plurality of elements means two or more elements, and the like.
The embodiment of the invention provides a shooting method and an electronic device. Specifically, in a case that a preview image captured by a first camera of the electronic device is displayed on a shooting preview interface of the electronic device, if the electronic device detects (receives) a first input of a user through a second camera of the electronic device, the electronic device may update or save the preview image in response to the first input. With this scheme, the user can perform an input within the acquisition range of the second camera to trigger the electronic device to directly update or save the preview image in the shooting preview interface, without having to move the electronic device to do so; therefore, the convenience of viewing or shooting an image can be improved, and the human-computer interaction performance is improved.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The following describes a software environment applied to the shooting method provided by the embodiment of the present invention, taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application. For example, a camera application in an embodiment of the present invention may be developed.
In general, a camera application may include two parts, one part being a camera service running inside the electronic device for detecting various inputs and the like of a user on a camera application interface; the other part is content displayed on the screen of the electronic device, such as content displayed in response to various inputs by the user, etc.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the shooting method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the shooting method may run based on the android operating system shown in fig. 1. That is, the processor or the electronic device may implement the shooting method provided by the embodiment of the present invention by running the software program in the android operating system.
The electronic device in the embodiment of the invention can be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the non-mobile electronic device may be a personal computer (PC), a television (TV), a teller machine or a self-service machine, which is not specifically limited in the embodiment of the present invention.
The executing subject of the shooting method provided by the embodiment of the present invention may be the electronic device, or may also be a functional module and/or a functional entity capable of implementing the shooting method in the electronic device, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited. The following takes an electronic device as an example to exemplarily describe the shooting method provided by the embodiment of the invention.
The shooting method provided by the embodiment of the invention can be applied to various shooting scenes. In a shooting scene, when a user A shoots a user B, the user A can trigger an electronic device to execute the shooting method provided by the embodiment of the invention; in another shooting scene, after a user purchases an article on the internet, the user finds that the article has a problem, and after the user feeds back to a merchant, the merchant requests the user to upload an article picture, and the user can trigger the electronic device to execute the shooting method provided by the embodiment of the invention; in another shooting scene, when a user sees an unfamiliar plant and needs to know the name of the plant by photographing the unknown plant, the user can trigger the electronic device to execute the shooting method provided by the embodiment of the invention.
Based on the above shooting scenes, in the process of triggering the electronic device to shoot, the shooting preview interface of the electronic device may display a preview image; however, due to the limitation of the screen size of the electronic device, the image captured by the camera of the electronic device may not be fully displayed in the shooting preview interface, that is, the preview image displayed on the shooting preview interface may be only a partial image of the image captured by the camera. When the user needs to view other images besides this partial image, the user can trigger the electronic device to execute the shooting method provided by the embodiment of the invention. Specifically, the user may perform an input within the acquisition range of a second camera (an under-screen camera) of the electronic device, so that the under-screen camera can detect the input. The electronic device can then automatically update the preview image displayed on the shooting preview interface, or save the preview image, based on the input. Therefore, the user can perform an input within the acquisition range of the second camera to trigger the electronic device to directly update or save the preview image in the shooting preview interface, without moving the electronic device, so that the convenience of viewing or shooting an image can be improved, and the human-computer interaction performance is improved.
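One way to picture this behavior is to model the preview as a crop window over the larger frame captured by the first camera, which shifts in the direction of the gesture detected by the under-screen camera. The following Kotlin sketch is illustrative only and is not taken from the patent; the class names, the direction-to-shift mapping, and the step size are assumptions.

```kotlin
// Illustrative only: the preview shown on screen is modeled as a crop window over
// the larger frame captured by the first camera; a gesture detected by the
// under-screen (second) camera shifts that window instead of requiring the user
// to move the device. All names and the step size are hypothetical.
data class CropWindow(val x: Int, val y: Int, val width: Int, val height: Int)

enum class GestureDirection { UP, DOWN, LEFT, RIGHT }

fun shiftCropWindow(
    window: CropWindow,
    direction: GestureDirection,
    stepPx: Int,
    frameWidth: Int,
    frameHeight: Int
): CropWindow {
    // Move the visible region in the gesture direction, clamped to the captured frame.
    val (dx, dy) = when (direction) {
        GestureDirection.LEFT -> -stepPx to 0
        GestureDirection.RIGHT -> stepPx to 0
        GestureDirection.UP -> 0 to -stepPx
        GestureDirection.DOWN -> 0 to stepPx
    }
    val newX = (window.x + dx).coerceIn(0, frameWidth - window.width)
    val newY = (window.y + dy).coerceIn(0, frameHeight - window.height)
    return window.copy(x = newX, y = newY)
}
```

Whether a "left" gesture pans the window left or scrolls the content left is a design choice; the sketch simply shifts the window in the reported direction.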
The following describes an embodiment of a shooting method provided by an embodiment of the present invention with reference to various drawings.
As shown in fig. 2, an embodiment of the present invention provides a photographing method, which may include S201-S202 described below.
S201, under the condition that a preview image collected by a first camera is displayed on a shooting preview interface, the electronic equipment receives first input of a user through a second camera.
The first input may be an input detected by the electronic device through the second camera, that is, the electronic device may receive (detect) the first input of the user through the second camera.
Optionally, in an embodiment of the present invention, the first input may be an input performed by an operation body in the air above the screen of the electronic device. Specifically, the first input may be a movement input of the operation body in different directions above the screen; alternatively, the first input may be an input in which the operation body stays at a preset position above the screen for a preset time. The preset position may be within the acquisition range of the second camera.
Optionally, in the embodiment of the present invention, the operation body may be a finger of a user, or may be a touch pen. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
Optionally, in this embodiment of the present invention, the movement input may be a movement input along any direction, for example, a movement along an "up" direction, a movement along a "down" direction, a movement along a "left" direction, a movement along a "right" direction, or the like, which may be determined according to actual usage requirements, and this embodiment of the present invention is not limited.
It should be noted that, directions such as "up", "down", "left", "right", and the like in the embodiment of the present invention are described by taking as an example an input of an operation body above a screen of an electronic device when the screen faces a user. That is, the directions "up", "down", "left" and "right" in the embodiment of the present invention are relative to the user when the screen of the electronic device faces the user. It is understood that the directions "up", "down", "left" and "right" are exemplary and not intended to limit the embodiments of the present invention. In actual implementation, any other possible directions may also be used, which may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
Optionally, in the embodiment of the present invention, the first camera may be a front camera, or may also be a rear camera. The method can be determined according to the use requirement, and the embodiment of the invention is not limited.
Optionally, in the embodiment of the present invention, the second camera may be located below the screen of the electronic device, that is, the second camera may be an under-screen camera of the electronic device.
Optionally, in the embodiment of the present invention, when the electronic device displays the preview image on the shooting preview interface, the electronic device may further display a focusing identifier on the shooting preview interface. Specifically, after the electronic device runs the camera application, the electronic device may automatically control the first camera to focus on a certain physical object. After the first camera of the electronic device is focused on the physical object, the electronic device may display a focusing identifier (e.g., identifier 31 in fig. 3), such as a focusing frame, on the displayed image of the physical object for indicating to the user the physical object to which the electronic device is currently focused.
It is understood that the above focusing identification may be used to indicate a physical object focused by the first camera of the electronic device. I.e. the focus identification may be used to indicate to which physical object the electronic device is currently focused.
Illustratively, fig. 3 shows a schematic diagram of a shooting preview interface of the camera application displayed by the electronic device. When the electronic device automatically controls the first camera to focus on the physical object corresponding to the object 30, the electronic device may display the focusing identifier 31 at the display position of the object 30.
S202, the electronic equipment responds to the first input, and the preview image is updated or saved.
In the embodiment of the present invention, after the electronic device receives the first input, the electronic device may update the preview image or save the preview image in response to the first input. Specifically, in a case where the first input is a movement input of the operation body in different directions over the screen, the electronic device may update the preview image in response to the first input. In a case where the first input is an input in which the operation body stays at a preset position above the screen for a preset time, the electronic device may save the preview image in response to the first input.
Optionally, in this embodiment of the present invention, the electronic device may update the preview image according to the moving direction of the first input.
For example, assume that the first input is an input in which the operation body moves in the "left" direction above the screen. Fig. 4(a) shows a schematic view of a preview image displayed on the shooting preview interface of the electronic device. After the electronic device receives the first input, the electronic device may update the preview image and display the preview image shown in fig. 4(b). The image shown at 40 is an image captured by the camera of the electronic device but not previously displayed in the shooting preview interface of the electronic device.
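A minimal sketch of how S202 could dispatch on the two input types described above (a directional movement updates the preview; a hover held for a preset time saves it). The callback shape, the type names, and the hold duration are assumptions, not the patent's implementation.

```kotlin
// Minimal dispatch sketch for S202. The second camera is assumed to report either
// a directional movement or a hover held at a preset position; names and the hold
// duration are hypothetical.
enum class Direction { UP, DOWN, LEFT, RIGHT }

sealed class FirstInput {
    data class Move(val direction: Direction, val distanceMm: Float) : FirstInput()
    data class Hover(val durationMs: Long) : FirstInput()
}

class PreviewController(private val hoverHoldMs: Long = 1500L) {
    fun onFirstInput(input: FirstInput) {
        when (input) {
            is FirstInput.Move -> updatePreview(input.direction, input.distanceMm)
            is FirstInput.Hover -> if (input.durationMs >= hoverHoldMs) savePreview()
        }
    }

    private fun updatePreview(direction: Direction, distanceMm: Float) {
        // Shift the visible region / move the focusing identifier according to the
        // gesture's direction and distance (see the other sketches).
        println("update preview: $direction by $distanceMm mm")
    }

    private fun savePreview() {
        // Persist the currently displayed preview image, completing the capture.
        println("preview image saved")
    }
}
```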
Optionally, in the embodiment of the present invention, the step S202 may be specifically implemented by the step S202a described below.
S202a, when the distance between the operation body and the screen of the electronic device is greater than or equal to the first threshold and less than or equal to the second threshold, the electronic device updates the preview image or saves the preview image.
Wherein the first threshold is smaller than the second threshold.
Optionally, in the embodiment of the present invention, the first threshold and the second threshold may be any possible values, which are determined according to actual usage requirements, and the embodiment of the present invention is not limited.
In the embodiment of the invention, as the operation body gradually approaches the screen of the electronic device, the second camera of the electronic device can detect the distance between the operation body and the screen in real time; when it is detected that the distance between the operation body and the screen is greater than or equal to the first threshold and less than or equal to the second threshold, the user can trigger the electronic device to update the preview image or save the preview image by performing the first input.
Optionally, in an embodiment of the present invention, in one possible implementation manner, the second camera may determine the distance between the operation body and the screen by monitoring the image size of the operation body acquired by the second camera. The user may preset, in the electronic device, a correspondence between a plurality of operation body images and a plurality of distances, that is, one operation body image may correspond to one distance, and the sizes of the plurality of operation body images are different from one another. The electronic device may then save the correspondence between the plurality of operation body images and the plurality of distances. In this way, after the second camera captures an image of the operation body, the electronic device may find, from the plurality of preset operation body images, a target operation body image matching the image captured by the second camera, determine the distance corresponding to the target operation body image, and take that distance as the current distance between the operation body and the screen.
In this embodiment of the present invention, matching the image of the operating body acquired by the second camera with the image of the target operating body may be understood as: the image of the operation body acquired by the second camera is the same as the image of the target operation body, or the matching degree of the image of the operation body acquired by the second camera and the image of the target operation body is larger than a threshold value.
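The size-based distance estimate above can be pictured as a lookup against a preset table mapping apparent operation-body sizes to distances. The sketch below is a hypothetical illustration; the table values, the nearest-size matching rule, and all names are assumptions.

```kotlin
// Hypothetical illustration of the size-based distance estimate: the apparent size
// of the operation body in the second camera's image is matched against a preset
// table of (size, distance) pairs.
import kotlin.math.abs

data class SizeDistanceEntry(val apparentSizePx: Int, val distanceMm: Float)

val presetTable = listOf(
    SizeDistanceEntry(apparentSizePx = 320, distanceMm = 10f),
    SizeDistanceEntry(apparentSizePx = 240, distanceMm = 20f),
    SizeDistanceEntry(apparentSizePx = 160, distanceMm = 40f),
    SizeDistanceEntry(apparentSizePx = 80, distanceMm = 80f)
)

fun estimateDistanceMm(measuredSizePx: Int): Float =
    // "Matching" is approximated here as the nearest preset size; the text only
    // requires that the captured image match one of the preset operation-body images.
    presetTable.minByOrNull { abs(it.apparentSizePx - measuredSizePx) }!!.distanceMm
```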
In another possible implementation manner, the second camera may determine the distance between the operating body and the screen by emitting infrared rays. Specifically, the second camera emits light beams such as infrared rays to the operating body, and then the operating body can reflect the light rays to the second camera, so that the second camera can determine the reflection time of the light rays, and determine the distance between the operating body and the screen according to the time difference between the moment of emitting the light rays and the moment of receiving the reflected light rays.
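The time-of-flight variant amounts to distance = c * Δt / 2, where Δt is the interval between emitting the infrared light and receiving its reflection. A minimal sketch, assuming the two timestamps are available in nanoseconds:

```kotlin
// Minimal sketch of the time-of-flight estimate: distance = c * Δt / 2.
// The timestamp source and units (nanoseconds) are assumptions.
const val SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

fun distanceFromRoundTripMeters(emitTimeNs: Long, receiveTimeNs: Long): Double {
    val roundTripSeconds = (receiveTimeNs - emitTimeNs) / 1e9
    // Halve the round trip: the light travels to the operation body and back.
    return SPEED_OF_LIGHT_M_PER_S * roundTripSeconds / 2.0
}
```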
Optionally, in this embodiment of the present invention, when the distance between the operation body and the screen of the electronic device is greater than or equal to the first threshold and less than or equal to the second threshold, the electronic device may further control the camera application to enter the focus preview mode.
Optionally, in this embodiment of the present invention, in a case that the electronic device controls the camera application to enter the focus preview mode, after the operation body moves in different directions above the screen, that is, after the electronic device receives the first input, the electronic device may update the display position of the focus identifier in the preview image in response to the first input.
For example, the "electronic device updates the preview image in response to the first input" in S202 described above may be specifically implemented in S203 described below.
S203, the electronic equipment responds to the first input, and the display position of the focusing mark in the preview image is updated.
In the embodiment of the present invention, after the electronic device receives the first input, the electronic device may update the display position of the in-focus mark in the preview image in response to the first input. Specifically, after the electronic device receives the first input, the electronic device may control the focus identifier to move from an initial position of the focus identifier (i.e., a display position of the focus identifier on the screen before the first input is performed) to another position.
Optionally, in the embodiment of the present invention, the step S203 may be specifically implemented by the step S203a described below.
S203a, the electronic device responds to the first input, and the display position of the focusing mark in the preview image is updated from the first display position to the second display position according to the movement displacement of the first input.
Optionally, in this embodiment of the present invention, the moving displacement may include a moving distance and a moving direction.
In the embodiment of the present invention, when the operating body moves in different directions above the screen of the electronic device, in a possible implementation manner, the second camera of the electronic device may acquire a displacement image of the operating body moving above the screen in real time, determine a moving distance and a moving direction of the operating body according to the displacement image, and update a display position of the focusing identifier in the preview image according to the moving distance and the moving direction. In another possible implementation manner, a second camera of the electronic device may acquire, in real time, a displacement image of the operating body moving above the screen, and then the second camera may transmit the displacement image to a processor of the electronic device, after the processor receives the displacement image, the processor may determine a moving distance and a moving direction of the operating body according to the displacement image, and update a display position of the focusing identifier in the preview image according to the moving distance and the moving direction.
In the embodiment of the present invention, the user may preset a correspondence relationship between the moving distances of the plurality of operation bodies (hereinafter, referred to as first distances) and the moving distances of the plurality of focus marks (hereinafter, referred to as second distances) in the electronic device, and then the electronic device may save the correspondence relationship between the first distances and the second distances. In this way, after the electronic device determines the moving distance and the moving direction of the operation body, the electronic device may determine the moving direction of the focusing mark according to the moving direction, then the electronic device may find a first distance having the same size as the moving distance of the operation body from among a plurality of preset first distances, and determine a second distance corresponding to the first distance according to the first distance, and then the electronic device may move the display position of the focusing mark from the initial position (i.e., the first display position) to a position (i.e., the second display position) located at a second distance from the first display position for display, that is, the electronic device may update the display position of the focusing mark in the preview image from the first display position to the second display position.
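The correspondence described above can be pictured as a small lookup table from the operation body's movement (the first distance) to the on-screen displacement of the focusing identifier (the second distance), e.g. 2 mm of finger movement mapping to 0.05 mm of identifier movement, matching the example used later in the description. The sketch below is illustrative; the table values, names, and exact-match rule are assumptions.

```kotlin
// Illustrative lookup from the operation body's movement (first distance) to the
// focusing identifier's on-screen displacement (second distance).
data class Point(val x: Float, val y: Float)

val firstToSecondDistanceMm = mapOf(1f to 0.025f, 2f to 0.05f, 4f to 0.1f)

fun moveFocusingIdentifier(current: Point, unitDirection: Point, firstDistanceMm: Float): Point {
    // Find the second distance corresponding to a first distance of the same size,
    // as described in the text; if no entry matches, leave the identifier in place.
    val secondDistanceMm = firstToSecondDistanceMm[firstDistanceMm] ?: return current
    // The identifier moves in the same direction as the operation body, scaled by
    // the preset correspondence.
    return Point(
        x = current.x + unitDirection.x * secondDistanceMm,
        y = current.y + unitDirection.y * secondDistanceMm
    )
}
```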
It can be understood that, in the former implementation manner, the function performed by the processor of the electronic device in the latter implementation manner may be integrated into the second camera.
In the embodiment of the invention, as the user can execute an input within the acquisition range of the second camera to trigger the electronic equipment to directly update the preview image in the shooting preview interface or store the preview image, and the electronic equipment can be triggered to update the preview image in the shooting preview interface or store the preview image without moving the electronic equipment by the user, the convenience of the user for checking or shooting the image can be improved, and the man-machine interaction performance is improved.
Optionally, in this embodiment of the present invention, after the electronic device receives the first input, the electronic device may further move the focus of the first camera from one physical object to another physical object in response to the first input, that is, the electronic device may change the physical object focused by the first camera.
For example, after S201, the shooting method provided by the embodiment of the present invention may further include S204 described below.
S204, the electronic equipment responds to the first input, and the focusing point of the first camera is moved from the first entity object to the second entity object.
The first entity object may be an entity object corresponding to the first display position, that is, an entity object corresponding to an object located at the first display position. The second physical object may be a physical object corresponding to the second display position, that is, a physical object corresponding to an object located at the second display position.
In the embodiment of the present invention, after the electronic device receives the first input, the electronic device may change the physical object focused by the first camera in response to the first input, that is, the electronic device may control the first camera to move the focus point from the first physical object to the second physical object.
For example, it is assumed that the entity object currently focused by the electronic device is object A, and that when the first distance is 2 mm, the corresponding second distance is 0.05 mm. When the operation body moves 2 mm in the "left" direction above the screen, the electronic device may control the first camera to move the focus point from object A to the entity object B corresponding to the image located on the left side of the image of object A and 0.05 mm away from the image of object A.
Further, after the electronic device moves the focus point from the first entity object to the second entity object, the electronic device may update the display position of the focusing identifier in the preview image from the first display position to the second display position.
It should be noted that, the method for the electronic device to change the entity object focused by the first camera may be: the electronic equipment controls the lens of the first camera to move, so that the projection position of the lens of the first camera on an imaging chip of the electronic equipment is changed, and the entity object focused by the first camera is changed.
For example, assume that the moving direction is "right" and that, when the first distance is 2 mm, the corresponding second distance is 0.05 mm. As shown in fig. 3, the first camera focuses on the physical object corresponding to the object 30, and the focusing identifier is the identifier shown at 31. After the operation body moves 2 mm in the "right" direction above the screen of the electronic device, that is, after the electronic device receives the first input, as shown in fig. 5, the electronic device may, in response to the first input, capture through the second camera a displacement image of the movement of the operation body above the screen, and determine from the displacement image that the moving direction of the operation body is "right" and its moving distance is 2 mm (corresponding to a second distance of 0.05 mm). The electronic device may then control the first camera to move the focus point from the physical object corresponding to the object 30 to the physical object corresponding to the object 50, which is located on the right side of the object 30 at a distance of 0.05 mm from the object 30, and move the focusing identifier 31 to the position shown at 51 for display.
It should be noted that, in the embodiment of the present invention, each of the above S203 and S204 is a result of the electronic device responding to the first input. In practical implementation, after executing S201, the electronic device may execute S204 and then execute S203.
In the embodiment of the invention, after the electronic device automatically focuses on a certain entity object, when the user needs the electronic device to refocus on another entity object, the user can trigger the electronic device to do so through the first input. Specifically, the user may trigger the electronic device to control the first camera to focus on another entity object by performing the first input within the acquisition range of the second camera. There is no need, as in the prior art, to tap the image of the other entity object in the preview image to trigger focusing, so the convenience of focusing on an entity object can be improved.
Optionally, in the embodiment of the present invention, a position of the second camera below the screen may correspond to a position of a shooting control in the shooting preview interface.
For example, as shown in fig. 3, in the case that the shooting control is the control shown by 32, the second camera may be located at a position below the screen corresponding to the position of the shooting control shown by 32.
Optionally, in the embodiment of the present invention, when the second camera is located below the screen of the electronic device, and the location of the second camera below the screen corresponds to the location of the shooting control in the shooting preview interface, after the electronic device updates the preview image, the user may directly execute an input (for example, a second input described below) to the shooting control to trigger the electronic device to store the preview image, that is, to complete the operation of shooting the image.
For example, in conjunction with fig. 2 described above, as shown in fig. 6, in a case where the position of the second camera below the screen may correspond to the position of the shooting control in the shooting preview interface, after "the electronic device updates the preview image in response to the first input" in S202 described above, the shooting method provided by the embodiment of the present invention may further include S205 and S206 described below.
And S205, the electronic equipment receives a second input of the shooting control by the user.
S206, the electronic equipment responds to the second input and saves the preview image.
In the embodiment of the present invention, after the electronic device receives the second input of the shooting control from the user, the electronic device may save the preview image in response to the second input. Thus, the operation of capturing the image can be completed.
Optionally, in the embodiment of the present invention, the second input may specifically be any possible form of input, such as click input, re-press input, or long-press input of the shooting control by the user. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
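A minimal sketch of the two-step flow of S202 followed by S205 and S206, assuming the gesture first updates the preview and a tap (or press) on the shooting control then saves it; the types and the save routine are hypothetical.

```kotlin
// Illustrative two-step flow: the first input (gesture over the under-screen
// camera) updates the preview, then the second input (tap on the shooting
// control that sits above that camera) saves it. All names are hypothetical.
class CaptureSession {
    private var previewUpdated = false

    fun onFirstInput() {
        // S202: update the preview shown in the shooting preview interface.
        previewUpdated = true
        println("preview updated")
    }

    fun onSecondInput() {
        // S205-S206: any of the input forms named in the text (click, hard press,
        // long press) saves the currently displayed preview image.
        println("preview image saved (updated beforehand: $previewUpdated)")
    }
}

fun main() {
    val session = CaptureSession()
    session.onFirstInput()
    session.onSecondInput()
}
```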
In the embodiment of the invention, because the under-screen camera is positioned at the position below the screen corresponding to the position of the shooting control, after the user triggers the electronic equipment to update the preview image through the first input, the user can directly operate the shooting control to trigger the electronic equipment to shoot. Therefore, in the process of triggering the electronic equipment to update the preview image and shoot, the user does not need to move the electronic equipment, and the operation of triggering the electronic equipment to update the preview image and shoot can be finished, so that the convenience of updating the preview image and shooting by the electronic equipment can be improved.
Furthermore, because the camera under the screen is located at the position below the screen corresponding to the position of the shooting control, the user can directly operate the shooting control after the user triggers the electronic device to focus on other entity objects through the first input, so as to trigger the electronic device to shoot. Therefore, in the process of triggering the focusing and shooting of the electronic equipment, the fingers of the user do not need to move too much, and the time from the triggering of focusing to the triggering of shooting can be shortened, so that the shaking probability of the electronic equipment in the process can be reduced, and the focusing accuracy of the electronic equipment is improved.
Furthermore, at the moment before the user triggers the electronic device to shoot, since the user performs an input near the shooting control, the electronic device can be triggered to control the first camera to focus on a certain entity object without moving the finger from the shooting control to the position of the image of the entity object, so that the convenience of the electronic device for focusing on the entity object can be improved.
Optionally, in this embodiment of the present invention, after the user moves the electronic device, the electronic device may control the first camera to refocus the entity object. In this case, if the user needs the electronic device to fix the entity object focused by the first camera, the user may keep the distance between the operation body and the screen of the electronic device within a certain threshold (for example, a first threshold described below), and may trigger the electronic device to fix the focus of the first camera to the certain entity object, that is, the electronic device may fix the entity object focused by the first camera.
For example, after S202, the shooting method provided in the embodiment of the present invention may further include S207 described below.
And S207, under the condition that the distance between the operation body and the screen is smaller than a first threshold value, the electronic equipment fixes the focusing point of the first camera to the third entity object.
In the embodiment of the present invention, in a process that the operating body gradually approaches the screen of the electronic device, when a distance between the operating body and the screen of the electronic device is smaller than a first threshold, the electronic device may control the camera application to enter a fixed focus mode (that is, the camera application of the electronic device switches from the focus preview mode to the fixed focus mode), and fix the focus of the first camera to the third entity object, that is, the entity object focused by the first camera is fixed by the electronic device.
In the embodiment of the present invention, in a case where the electronic device controls the camera application to enter the fixed focus mode, the electronic device may control the first camera to fix the focus point to the third physical object. For example, the electronic device may fix the focus point of the first camera to the third physical object during movement of the electronic device by the user.
In an embodiment of the present invention, the method for fixing the entity object focused by the first camera by the electronic device may be: the electronic device can fix the lens of the first camera, so that the projection position of the lens of the first camera on the imaging chip of the electronic device is fixed, and the entity object focused by the first camera is fixed.
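Putting S202a and S207 together, the distance between the operation body and the screen effectively selects the camera-app mode. The sketch below is one reading of that logic, not the patent's code; the mode names and the boundary handling are assumptions.

```kotlin
// Illustrative mode selection implied by S202a and S207: the distance between the
// operation body and the screen picks the camera-app mode.
enum class FocusMode { FIXED_FOCUS, FOCUS_PREVIEW, NORMAL }

fun selectFocusMode(distanceMm: Float, firstThresholdMm: Float, secondThresholdMm: Float): FocusMode =
    when {
        distanceMm < firstThresholdMm -> FocusMode.FIXED_FOCUS     // S207: lock the focus point
        distanceMm <= secondThresholdMm -> FocusMode.FOCUS_PREVIEW // S202a: gestures update or save the preview
        else -> FocusMode.NORMAL                                   // beyond the second threshold: no action
    }
```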
In the embodiment of the invention, under the condition that the distance between the operation body and the screen is smaller than the first threshold, the electronic equipment can control the first camera to enter the fixed focusing mode, so that when a user moves the electronic equipment, the electronic equipment can also fix the entity object focused by the first camera, thereby avoiding the electronic equipment from refocusing other entity objects, further ensuring that the electronic equipment focuses on the entity object required by the user, and improving the image shooting effect.
Optionally, in the embodiment of the present invention, after the electronic device fixes the focus point of the first camera to the third entity object, that is, after the electronic device fixes the entity object focused by the first camera, the display position of the focusing identifier may be fixed. Even if the moving distance of the electronic device is greater than a certain threshold (for example, a third threshold described below), the electronic device may fixedly display the in-focus indicator in the preview image.
For example, after S207, the shooting method provided in the embodiment of the present invention may further include S208 described below.
And S208, under the condition that the distance between the operation body and the screen is smaller than the first threshold and the moving distance of the electronic equipment in the space is larger than or equal to a third threshold, fixedly displaying the focusing mark at a third display position in the preview image by the electronic equipment.
The entity object corresponding to the third display position is a third entity object, that is, the entity object corresponding to the object located at the third display position is the third entity object.
Optionally, in an embodiment of the present invention, the third threshold may be the minimum moving distance that can trigger the electronic device to refocus on an entity object when the camera application is in the non-fixed focusing mode. That is, in the non-fixed focusing mode, the electronic device may refocus on an entity object when the moving distance of the electronic device in space is greater than or equal to the third threshold. In the fixed focusing mode, even if the moving distance is greater than or equal to the third threshold, the electronic device does not refocus on another entity object, that is, the electronic device keeps the entity object focused by the first camera fixed.
In the embodiment of the present invention, when the distance between the operation body and the screen of the electronic device is smaller than the first threshold, if the moving distance of the electronic device in the space is greater than or equal to a third threshold, the electronic device may fix the focus of the first camera to the third entity object, and fixedly display the in-focus mark at a third display position in the preview image.
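The refocus decision in S208 can be summarized as: the device refocuses only when it is not in the fixed focusing mode and its movement reaches the third threshold. A minimal sketch under those assumptions (the device displacement source and units are hypothetical):

```kotlin
// Minimal sketch of the refocus decision in S208: in the fixed focusing mode the
// focused entity object and the focusing identifier stay put regardless of device
// movement; otherwise movement at or beyond the third threshold triggers refocusing.
fun shouldRefocus(inFixedFocusMode: Boolean, deviceMovementMm: Float, thirdThresholdMm: Float): Boolean =
    !inFixedFocusMode && deviceMovementMm >= thirdThresholdMm
```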
In the embodiment of the invention, the electronic equipment can fixedly display the focusing identification in the preview image, so that the user can be prompted that the entity object focused by the first camera at present is not changed, the user can conveniently execute subsequent shooting actions, and the man-machine interaction performance can be improved.
In the embodiment of the present invention, the photographing methods shown in the above-mentioned method drawings are all exemplarily described with reference to one drawing in the embodiment of the present invention. In specific implementation, the shooting methods shown in the above method drawings may also be implemented by combining with any other drawings that may be combined, which are illustrated in the above embodiments, and are not described herein again.
As shown in fig. 7, an embodiment of the present invention provides an electronic device 400, and the electronic device 400 may include a receiving module 401 and a processing module 402. The receiving module 401 may be configured to receive a first input of a user through a second camera of the electronic device when a preview image acquired by a first camera of the electronic device is displayed on a shooting preview interface; the processing module 402 may be configured to update the preview image or save the preview image in response to the first input received by the receiving module 401.
Optionally, in an embodiment of the present invention, the processing module 402 may be specifically configured to update a display position of the focusing identifier in the preview image. The focusing identification can be used for indicating a solid object focused by the first camera.
Optionally, in an embodiment of the present invention, the processing module 402 may be specifically configured to update the display position of the focusing mark in the preview image from the first display position to the second display position according to the movement displacement of the first input.
Optionally, in this embodiment of the present invention, the processing module 402 may be further configured to move the focus of the first camera from the first entity object to the second entity object in response to the first input. The first entity object may be an entity object corresponding to the first display position, and the second entity object may be an entity object corresponding to the second display position.
Optionally, in the embodiment of the present invention, under the condition that the second camera is located below the screen of the electronic device, and the location of the second camera below the screen corresponds to the location of the shooting control in the shooting preview interface, the processing module 402 may be further configured to, after the preview image is updated, respond to a second input of the user to the shooting control, and store the preview image.
Optionally, in an embodiment of the present invention, the processing module 402 may be specifically configured to update the preview image or save the preview image when a distance between the operation body and the screen of the electronic device is greater than or equal to a first threshold and is less than or equal to a second threshold. Wherein the first threshold is less than the second threshold.
Optionally, in this embodiment of the present invention, the processing module 402 may be further configured to fix the focus of the first camera to the third entity object when a distance between the operation body and the screen is smaller than a first threshold.
Optionally, in this embodiment of the present invention, the processing module 402 may be further configured to, after fixing the focus of the first camera to the third entity object, fixedly display the in-focus indicator at a third display position in the preview image when a distance between the operating body and the screen is smaller than a first threshold and a moving distance of the electronic device in the space is greater than or equal to a third threshold. The entity object corresponding to the third display position may be a third entity object.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described herein again to avoid repetition.
The embodiment of the invention provides electronic equipment, wherein under the condition that a preview image acquired by a first camera of the electronic equipment is displayed on a shooting preview interface of the electronic equipment, if the electronic equipment detects (receives) a first input of a user through a second camera of the electronic equipment, the electronic equipment can respond to the first input and update the preview image or save the preview image. According to the scheme, the user can execute an input within the acquisition range of the second camera to trigger the electronic equipment to update or save the preview image in the shooting preview interface, and the user does not need to move the electronic equipment to trigger the electronic equipment to update or save the preview image in the shooting preview interface, so that the convenience of the user in checking or shooting the image can be improved, and the man-machine interaction performance is improved.
Fig. 8 is a hardware schematic diagram of an electronic device implementing various embodiments of the invention. As shown in fig. 8, the electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 8 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 110 may be configured to control the user input unit 107 to receive a first input from a user through the second camera in a case that the preview image captured by the first camera is displayed on the shooting preview interface, and update the preview image or save the preview image in response to the first input received by the user input unit 107.
It can be understood that, in the embodiment of the present invention, the receiving module 401 in the structural schematic diagram of the electronic device (for example, fig. 7) may be implemented by the user input unit 107. The processing module 402 in the structural schematic diagram of the electronic device (for example, fig. 7) may be implemented by the processor 110.
The embodiment of the invention provides electronic equipment, wherein under the condition that a preview image acquired by a first camera of the electronic equipment is displayed on a shooting preview interface of the electronic equipment, if the electronic equipment detects (receives) a first input of a user through a second camera of the electronic equipment, the electronic equipment can respond to the first input and update the preview image or save the preview image. According to the scheme, the user can execute an input within the acquisition range of the second camera to trigger the electronic equipment to update or save the preview image in the shooting preview interface, and the user does not need to move the electronic equipment to trigger the electronic equipment to update or save the preview image in the shooting preview interface, so that the convenience of the user in checking or shooting the image can be improved, and the man-machine interaction performance is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a graphics processor (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
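In this scheme the second camera's frames never need to be displayed; they only need to reach the input-recognition logic. On Android, one plausible way to obtain per-frame data for that purpose is a frame-analysis use case such as CameraX's ImageAnalysis, sketched below. CameraX is not mentioned in the patent, so treat this as an assumption about tooling rather than a description of the patented method.

// One possible way (assuming AndroidX CameraX) to feed frames from the second camera
// to an input detector without ever displaying them.
import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner

fun bindSecondCameraAnalysis(
    context: Context,
    lifecycleOwner: LifecycleOwner,
    onFrame: (width: Int, height: Int, luma: ByteArray) -> Unit // hypothetical callback into the gesture detector
) {
    val providerFuture = ProcessCameraProvider.getInstance(context)
    providerFuture.addListener({
        val provider = providerFuture.get()
        val analysis = ImageAnalysis.Builder()
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST) // drop stale frames
            .build()
        analysis.setAnalyzer(ContextCompat.getMainExecutor(context)) { image ->
            // Copy the Y (luminance) plane only; that is often enough for simple hand detection.
            val y = image.planes[0].buffer
            val bytes = ByteArray(y.remaining()).also { y.get(it) }
            onFrame(image.width, image.height, bytes)
            image.close() // must be closed, or the analyzer stops receiving frames
        }
        provider.bindToLifecycle(lifecycleOwner, CameraSelector.DEFAULT_FRONT_CAMERA, analysis)
    }, ContextCompat.getMainExecutor(context))
}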
The electronic device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and be used to identify the posture of the electronic device (such as switching between horizontal and vertical screens, related games, and magnetometer posture calibration) and for vibration-related recognition functions (such as a pedometer or tapping). The sensors 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
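Claims 8 and 16 below condition behavior on the moving distance of the electronic device in space, which on a phone would typically come from the motion sensors described here. The following Kotlin sketch registers a standard Android accelerometer-based listener and derives a very crude movement estimate; the integration scheme and the threshold value are illustrative assumptions (real implementations filter gravity, noise, and drift), and the onMoved callback is a made-up name.

// Hedged sketch: estimating whether the device has moved "enough" in space,
// using the standard Android SensorManager API. The integration below is rough
// illustration only.
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.sqrt

class DeviceMotionEstimator(context: Context, private val onMoved: () -> Unit) : SensorEventListener {
    private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION)
    private var lastTimestampNs = 0L
    private val velocity = floatArrayOf(0f, 0f, 0f)
    private var distance = 0f
    private val movedThresholdMeters = 0.05f // illustrative stand-in for the "third threshold"

    fun start() = sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME)
    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        if (lastTimestampNs != 0L) {
            val dt = (event.timestamp - lastTimestampNs) / 1e9f
            for (i in 0..2) velocity[i] += event.values[i] * dt
            val speed = sqrt(velocity[0] * velocity[0] + velocity[1] * velocity[1] + velocity[2] * velocity[2])
            distance += speed * dt
            if (distance >= movedThresholdMeters) onMoved()
        }
        lastTimestampNs = event.timestamp
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}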
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect touch operations performed by a user on or near it (e.g., operations performed on or near the touch panel 1071 with a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, or surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a switch key, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 8 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
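Touch input through this panel is the other input path the method uses: claims 5 and 13 below describe a second input on the shooting control that saves the preview image after it has been updated by the contactless input. The snippet below is a generic Android touch-handling sketch of that path; onShutterTapped() is a made-up callback name.

// Generic touch handling for the on-screen shooting control (illustrative only).
import android.annotation.SuppressLint
import android.view.MotionEvent
import android.view.View

@SuppressLint("ClickableViewAccessibility")
fun bindShutterTouch(shutterView: View, onShutterTapped: () -> Unit) {
    shutterView.setOnTouchListener { _, event ->
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> true // claim the gesture
            MotionEvent.ACTION_UP -> {
                onShutterTapped()           // second input: save the preview image
                true
            }
            else -> false
        }
    }
}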
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device (such as audio data, a phonebook, etc.), and the like. Further, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the electronic device as a whole. The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component. Optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor. When the computer program is executed by the processor, it implements the processes of the foregoing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
Optionally, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it implements each process of the foregoing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (18)

1. A shooting method applied to an electronic device comprising a first camera and a second camera, characterized by comprising the following steps:
receiving a first input of a user through the second camera under the condition that a preview image acquired by the first camera is displayed on a shooting preview interface;
updating the preview image or saving the preview image in response to the first input;
wherein the first input is a contactless operation input by an operation body in the space above a screen of the electronic equipment; the contactless operation input is an input in which the operation body moves above the screen along different directions, or an input in which the operation body stays at a preset position above the screen for a preset time; and the preset position is located in the acquisition range of the second camera.
2. The method of claim 1, wherein the updating the preview image comprises:
updating a display position of a focusing identifier in the preview image, wherein the focusing identifier is used for indicating the entity object focused by the first camera.
3. The method of claim 2, wherein updating the display position of the focusing identifier in the preview image comprises:
and updating, according to the movement displacement of the first input, the display position of the focusing identifier in the preview image from a first display position to a second display position.
4. The method of claim 3, further comprising:
and in response to the first input, moving the focusing point of the first camera from a first entity object to a second entity object, wherein the first entity object is an entity object corresponding to the first display position, and the second entity object is an entity object corresponding to the second display position.
5. The method according to any one of claims 1 to 4, wherein the second camera is located below a screen of the electronic device, and a position of the second camera below the screen corresponds to a position of a shooting control in the shooting preview interface;
after the updating the preview image, the method further comprises:
and in response to a second input of the user on the shooting control, saving the preview image.
6. The method of claim 1, wherein the updating the preview image or saving the preview image comprises:
and updating the preview image or saving the preview image when the distance between the operation body and the screen of the electronic equipment is greater than or equal to a first threshold value and less than or equal to a second threshold value, wherein the first threshold value is less than the second threshold value.
7. The method of claim 6, further comprising:
and under the condition that the distance between the operation body and the screen is smaller than the first threshold value, fixing the focusing point of the first camera to a third entity object.
8. The method of claim 7, wherein after fixing the focusing point of the first camera to the third entity object, the method further comprises:
if the distance between the operation body and the screen is smaller than the first threshold and the moving distance of the electronic equipment in space is larger than or equal to a third threshold, fixedly displaying a focusing identifier at a third display position in the preview image;
and the entity object corresponding to the third display position is the third entity object.
9. An electronic device, comprising a receiving module and a processing module;
the receiving module is used for receiving a first input of a user through a second camera of the electronic equipment under the condition that a preview image acquired by the first camera of the electronic equipment is displayed on a shooting preview interface;
the processing module is used for responding to the first input received by the receiving module, and updating the preview image or saving the preview image;
wherein the first input is a contactless operation input by an operation body in the space above a screen of the electronic equipment; the contactless operation input is an input in which the operation body moves above the screen along different directions, or an input in which the operation body stays at a preset position above the screen for a preset time; and the preset position is located in the acquisition range of the second camera.
10. The electronic device of claim 9,
the processing module is specifically configured to update a display position of a focusing identifier in the preview image, where the focusing identifier is used to indicate an entity object focused by the first camera.
11. The electronic device of claim 10,
the processing module is specifically configured to update the display position of the focusing identifier in the preview image from a first display position to a second display position according to the movement displacement of the first input.
12. The electronic device of claim 11,
the processing module is further configured to move the focus point of the first camera from a first entity object to a second entity object in response to the first input, where the first entity object is an entity object corresponding to the first display position, and the second entity object is an entity object corresponding to the second display position.
13. The electronic device according to any one of claims 9 to 12, wherein the second camera is located below a screen of the electronic device, and a position of the second camera below the screen corresponds to a position of a shooting control in the shooting preview interface;
the processing module is further configured to respond to a second input of the shooting control by the user after the preview image is updated, and save the preview image.
14. The electronic device of claim 9,
the processing module is specifically configured to update the preview image or save the preview image when a distance between an operation body and a screen of the electronic device is greater than or equal to a first threshold and less than or equal to a second threshold, where the first threshold is less than the second threshold.
15. The electronic device of claim 14,
the processing module is further configured to fix the focusing point of the first camera to a third entity object when the distance between the operation body and the screen is smaller than the first threshold.
16. The electronic device of claim 15,
the processing module is further configured to, after the focusing point of the first camera is fixed to the third entity object, fixedly display a focusing identifier at a third display position in the preview image when the distance between the operation body and the screen is smaller than the first threshold and a moving distance of the electronic device in space is greater than or equal to a third threshold;
and the entity object corresponding to the third display position is the third entity object.
17. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the photographing method according to any of claims 1 to 8.
18. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, realizes the steps of the photographing method according to any one of claims 1 to 8.
CN201911422792.0A 2019-12-31 2019-12-31 Shooting method and electronic equipment Active CN111107271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911422792.0A CN111107271B (en) 2019-12-31 2019-12-31 Shooting method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911422792.0A CN111107271B (en) 2019-12-31 2019-12-31 Shooting method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111107271A CN111107271A (en) 2020-05-05
CN111107271B true CN111107271B (en) 2022-02-08

Family

ID=70426590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911422792.0A Active CN111107271B (en) 2019-12-31 2019-12-31 Shooting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111107271B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738402B (en) * 2020-12-30 2022-03-15 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737731A (en) * 2018-05-30 2018-11-02 维沃移动通信有限公司 A kind of focusing method and terminal device
CN109328358A (en) * 2018-09-17 2019-02-12 深圳市汇顶科技股份有限公司 Under-screen optical detection system, electronic device and object proximity detection method thereof
CN110519512A (en) * 2019-08-16 2019-11-29 维沃移动通信有限公司 A kind of object processing method and terminal
CN110572575A (en) * 2019-09-20 2019-12-13 三星电子(中国)研发中心 camera shooting control method and device
CN110581910A (en) * 2019-09-17 2019-12-17 Oppo广东移动通信有限公司 Image acquisition method, device, terminal and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160131720A (en) * 2015-05-08 2016-11-16 엘지전자 주식회사 Mobile terminal and method for controlling the same
US10410037B2 (en) * 2015-06-18 2019-09-10 Shenzhen GOODIX Technology Co., Ltd. Under-screen optical sensor module for on-screen fingerprint sensing implementing imaging lens, extra illumination or optical collimator array
CN109960406B (en) * 2019-03-01 2020-12-08 清华大学 Intelligent electronic equipment gesture capturing and recognizing technology based on action between fingers of two hands
CN110166600B (en) * 2019-05-27 2020-12-04 Oppo广东移动通信有限公司 Electronic device and control method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737731A (en) * 2018-05-30 2018-11-02 维沃移动通信有限公司 A kind of focusing method and terminal device
CN109328358A (en) * 2018-09-17 2019-02-12 深圳市汇顶科技股份有限公司 Under-screen optical detection system, electronic device and object proximity detection method thereof
CN110519512A (en) * 2019-08-16 2019-11-29 维沃移动通信有限公司 A kind of object processing method and terminal
CN110581910A (en) * 2019-09-17 2019-12-17 Oppo广东移动通信有限公司 Image acquisition method, device, terminal and storage medium
CN110572575A (en) * 2019-09-20 2019-12-13 三星电子(中国)研发中心 camera shooting control method and device

Also Published As

Publication number Publication date
CN111107271A (en) 2020-05-05

Similar Documents

Publication Publication Date Title
CN110913132B (en) Object tracking method and electronic equipment
CN109743498B (en) Shooting parameter adjusting method and terminal equipment
CN110908558B (en) Image display method and electronic equipment
CN110769155B (en) Camera control method and electronic equipment
KR20220092937A (en) Screen display control method and electronic device
CN110752981B (en) Information control method and electronic equipment
CN109257505B (en) Screen control method and mobile terminal
CN110944139B (en) Display control method and electronic equipment
CN111083375B (en) Focusing method and electronic equipment
CN109901761B (en) Content display method and mobile terminal
CN111142723A (en) Icon moving method and electronic equipment
CN110830713A (en) Zooming method and electronic equipment
EP4068750A1 (en) Object display method and electronic device
CN110753155A (en) Proximity detection method and terminal equipment
CN111124231B (en) Picture generation method and electronic equipment
CN110908750B (en) Screen capturing method and electronic equipment
CN110209324B (en) Display method and terminal equipment
CN109104573B (en) Method for determining focusing point and terminal equipment
CN110058686B (en) Control method and terminal equipment
CN108833791B (en) Shooting method and device
JP7413546B2 (en) Photography method and electronic equipment
CN110913133B (en) Shooting method and electronic equipment
CN110647506B (en) Picture deleting method and terminal equipment
CN111107271B (en) Shooting method and electronic equipment
CN109829707B (en) Interface display method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant