CN115209027A - Camera focusing method and electronic equipment - Google Patents

Info

Publication number
CN115209027A
Authority
CN
China
Prior art keywords
electronic device
camera
face
distance
electronic equipment
Prior art date
Legal status
Pending
Application number
CN202110322579.3A
Other languages
Chinese (zh)
Inventor
张帆
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110322579.3A
Publication of CN115209027A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information
    • H04W 4/80: Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the application discloses a camera focusing method and an electronic device. A user wears a second electronic device (for example, a Bluetooth headset), which sends a Bluetooth signal to a first electronic device. When the user opens a camera program to take a picture of a person, the first electronic device can determine the position information of the second electronic device relative to the first electronic device from the received Bluetooth signal, and then focus the camera according to that position information. Thus, whether the face is partially or completely occluded, the lighting is poor, or a face recognizer fails to recognize the face, the first electronic device can always determine a stable focus point from the position information and focus the camera according to that focus point.

Description

Camera focusing method and electronic equipment
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to a camera focusing method and an electronic device.
Background
The demand from smartphone users for taking photos and videos with their phones keeps growing, and so do their expectations of the shooting experience. To make a specific scene appear sharper in the shot, that scene can be used as the focus point, and focusing is completed by the focusing module of the smartphone according to the focus point.
Implementations of smartphone autofocus fall roughly into three categories: contrast focusing (CDAF), phase focusing (PDAF) and laser-assisted focusing. Contrast focusing rests on the assumption that the contrast between adjacent pixels is greatest once focus has been achieved: a focus point is determined during focusing, the contrast between it and adjacent pixels is evaluated, and focusing is judged successful when a local maximum of the gradient is reached after repeatedly moving the focus motor. Phase focusing reserves some shielded pixels on the photosensitive element dedicated to phase detection: a focus point is determined during focusing, and a focus offset is computed from quantities such as the separation between these pixels and its change, so as to achieve focus. Laser-assisted focusing determines a focus point during focusing, emits a set of infrared laser beams from the shooting device toward it, and calculates the distance from the time at which the returning laser light is received. As this description shows, determining the focus point is a problem that every autofocus solution has to solve.
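For illustration only, the following Python sketch shows the contrast-detection search described above as a simple hill climb. The focus-motor and contrast-metric callables are hypothetical placeholders, not an actual camera API.

```python
def cdaf_focus(move_to, contrast_at, positions):
    """Step through candidate lens positions and stop at a local contrast maximum.

    move_to(pos) drives the (hypothetical) focus motor; contrast_at() returns a
    sharpness score for the focus region, e.g. summed gradient magnitude.
    """
    best_pos, best_score = None, float("-inf")
    for pos in positions:
        move_to(pos)
        score = contrast_at()
        if score < best_score:      # contrast started to fall:
            break                   # the previous position was a local maximum
        best_pos, best_score = pos, score
    move_to(best_pos)
    return best_pos
```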
However, limited by the hardware and software capabilities of current cell phones, the above techniques may not accurately determine the focus point in some scenarios.
For example, people are among the most frequently shot subjects when a user takes photos or videos with a smartphone. Focusing accurately on a face in different environments is a basic requirement for photography and video creation with a smartphone camera. A commonly used scheme today detects the face through the face-detection function of the phone's camera system and takes the face as the focus point; the position of the focus point can be determined as the position described by the key points of the facial features, and that position is then passed to the focusing module of the camera system, which completes focusing on the face.
However, in shooting environments with many faces, environments where a specific face is partially or completely occluded, and environments with poor lighting, the above techniques, limited by the software and hardware capabilities of current phones, often cannot identify the position of the specific face quickly and accurately. A stable focus point therefore cannot be determined, and the camera cannot focus accurately.
Disclosure of Invention
The embodiments of the application disclose a camera focusing method and an electronic device, so as to solve the problem that the electronic device cannot focus the camera accurately because a stable focus point cannot be determined.
In a first aspect, an embodiment of the present application provides a camera focusing method, applied to a first electronic device that includes a camera and at least 2 antennas. The method includes: receiving, through the at least 2 antennas, Bluetooth signals sent by a second electronic device; with the camera turned on, in response to a user operation input directed at a Bluetooth positioning control, determining the position information of the second electronic device relative to the first electronic device from the Bluetooth signals received by the at least 2 antennas; and focusing the camera based on the position information.
According to the method, the first electronic device determines the position information of the second electronic device relative to itself by computing on the Bluetooth signal from the second electronic device, and then focuses the camera according to that position information. Thus, whether the user's face is partially or completely occluded, the lighting is poor, or the face cannot be recognized by a face recognizer, the first electronic device can always determine a stable focus point from the position information and focus the camera according to it, which improves the focusing success rate.
In combination with the first aspect, in some embodiments, the position information of the second electronic device relative to the first electronic device includes a distance of the second electronic device from a plane in which the camera is located and orientation information of the second electronic device relative to the first electronic device; one implementation of the focusing of the camera based on the position information may be: the first electronic equipment corrects the distance according to the azimuth information of the second electronic equipment relative to the first electronic equipment; the first electronic device focuses the camera through the corrected distance.
The user wears the second electronic device, whose position is close to the position of the user's face, so the position of the face can be approximated by the position of the second electronic device; the first electronic device can therefore focus the camera according to the position information of the second electronic device relative to the first electronic device. Because the focal plane of the camera is a curved surface, focusing the camera with the corrected distance gives a better focusing result than focusing with the distance from the second electronic device to the plane in which the camera lies, and thus a sharper image at the position of the second electronic device.
With reference to the first aspect, in some embodiments, one implementation of correcting the distance according to the orientation information of the second electronic device relative to the first electronic device may be: the first electronic device determines, from the distance and from the orientation information of the second electronic device relative to the first electronic device, the distance between the second electronic device and the first electronic device, and takes it as the corrected distance.

Because focusing by the distance between the second electronic device and the first electronic device lets the first electronic device acquire a sharper image of the position where the second electronic device is located, the first electronic device can determine that distance through this implementation.
With reference to the first aspect, in some embodiments, the position information includes a distance of the second electronic device from a plane in which the camera is located and orientation information of the second electronic device with respect to the first electronic device, and the method may further include: the method comprises the steps that a first electronic device displays a preview image in real time under the condition that a camera is started;
one implementation manner of focusing the camera based on the position information may be: the first electronic equipment determines the position of the second electronic equipment on the preview image according to the azimuth information of the second electronic equipment relative to the first electronic equipment; determining a face frame on the preview image according to the position of the second electronic equipment on the preview image; focusing the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located, wherein the position of the face frame is the position of the geometric center of the face frame on the preview image, the target face is the face in the face frame in the preview image, and the distance between the target face and the plane where the camera is located is the distance between the second electronic device and the plane where the camera is located.
The user wears the second electronic device, whose position is close to the user's face. The first electronic device can therefore determine the face frame from the position of the second electronic device on the preview image, and then focus the camera according to the position of the face frame and the distance between the target face and the plane in which the camera lies, so that the camera is focused more accurately, clear images of the user's face are obtained, and the user experience is improved.
In combination with the first aspect, in some embodiments, the method may further include: and the first electronic equipment displays the face frame on the face position of the preview image.
Displaying the face frame on the preview image lets the user see the focusing position in the image more intuitively. In some embodiments, the first electronic device may further adjust the position of the face frame according to an operation the user inputs on it, for example by touching the position on the preview image where focusing is required.
With reference to the first aspect, in some embodiments, one implementation manner of determining a face frame on the preview image according to the position of the second electronic device on the preview image may be: the first electronic equipment determines the size of the face frame according to the distance between the target face and the plane where the camera is located and the size of the face model, wherein the size of the face frame is inversely proportional to the distance between the target face and the plane where the camera is located; and the first electronic equipment determines the position of the face frame on the preview image according to the position of the second electronic equipment on the preview image and the size of the face frame.
The first electronic device can determine the position and size of the face frame on the preview image from the size of the face model and the distance between the user's target face and the plane in which the camera lies. An accurate position lets the first electronic device determine an accurate distance and thus focus accurately; the first electronic device can also determine a focusing range from the size of the face frame, so as to obtain a clear image at the face position.
With reference to the first aspect, in some embodiments, after determining the face frame on the preview image, the method may further include: the first electronic equipment inputs the image in the face frame into a face recognizer, and when the face recognizer recognizes face features, the recognition result is output as a face; and adjusting the size of the face frame to be the size of the face by the first electronic equipment, wherein the position of the face frame is the position of the face.
Because the sizes of users' faces differ, combining the face recognition result makes the determined position and size of the face frame more accurate.
In combination with the first aspect, in some embodiments, the second electronic device is a pair of bluetooth headsets including a left headset and a right headset, the position information of the second electronic device relative to the first electronic device includes distances of the left headset and the right headset to the camera plane, respectively, and orientation information of the left headset and the right headset relative to the first electronic device, respectively;
one implementation of the focusing of the camera based on the position information may be: the method comprises the steps that first electronic equipment determines that the distance between a target face and a plane where a camera is located is the average value of the distances between a left earphone and a right earphone and the plane where the camera is located, and the target face is a face wearing the left earphone and the right earphone; the first electronic equipment determines the position of the left earphone on the preview image according to the azimuth information of the left earphone relative to the first electronic equipment; the first electronic equipment determines the position of the right earphone on the preview image according to the orientation information of the right earphone relative to the first electronic equipment; the first electronic equipment determines a face frame by taking the position of the left earphone on the preview image and the position of the right earphone on the preview image as boundaries; the first electronic equipment focuses the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located.
Through the position information of the left earphone and the right earphone, the first electronic device can determine the position of the face frame more quickly, which further increases the speed at which the first electronic device focuses the camera.
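As a rough illustration of this two-earphone variant (not code from the application), the sketch below derives the face frame and the focusing distance from the two earbud positions on the preview image; the 1.4 height-to-width ratio is an assumed face-model value.

```python
def face_frame_from_earbuds(left, right, z_left, z_right):
    """left/right: (x, y) preview-image positions of the two earbuds;
    z_left/z_right: their distances to the plane of the camera."""
    z_face = (z_left + z_right) / 2.0        # average distance to the target face
    x0, x1 = sorted((left[0], right[0]))     # the earbud positions bound the frame
    w = x1 - x0
    h = 1.4 * w                              # assumed face aspect ratio
    center = ((x0 + x1) / 2.0, (left[1] + right[1]) / 2.0)
    return center, (w, h), z_face
```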
With reference to the first aspect, in some embodiments, one implementation manner of focusing the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located may be: the first electronic equipment determines the distance between the camera and the target face according to the position of the face frame and the distance between the target face and the plane where the camera is located; the first electronic equipment focuses the camera through the distance between the camera and the target face.
Because the focal plane of the camera of the first electronic device is a curved surface, compared with focusing the camera by the distance from the target face to the plane where the camera is located, a clearer image at the target face can be obtained by focusing according to the distance between the camera and the target face.
With reference to the first aspect, in some embodiments, one implementation manner of the determining the location information of the second electronic device relative to the first electronic device according to the received bluetooth signals of the at least 2 antennas may be: the first electronic device determines an arrival angle of the Bluetooth signal to each antenna according to the wavelength of the Bluetooth signal, the phase difference of the Bluetooth signal received by each 2 antennas in the at least 2 antennas and the position of each antenna on the first electronic device; and the first electronic equipment determines the position information of the second electronic equipment relative to the first electronic equipment according to the arrival angle of the Bluetooth signal to each antenna and the position of each antenna on the first electronic equipment.
The first electronic device can determine the position information of the second electronic device relative to the first electronic device in real time through the positioning method of the arrival angle of the Bluetooth signal, and can determine a stable focusing point according to the position information, so that the first electronic device can more quickly and accurately focus the camera.
With the camera focusing method provided by the first aspect and its possible implementations, a user wears a second electronic device (for example, a Bluetooth headset) that sends a Bluetooth signal to the first electronic device. When the user opens a camera program to photograph himself, the first electronic device can determine the position information of the second electronic device relative to the first electronic device from the received Bluetooth signal, and then focus the camera according to the position information. Thus, whether the face is partially or completely occluded, the lighting is poor, or the face cannot be recognized by a face recognizer, the first electronic device can always determine a stable focusing position and focus the camera accurately according to it. Moreover, during shooting the first electronic device calculates the position of the second electronic device in real time, so even when the first electronic device or the second electronic device moves, the first electronic device can still quickly determine the position of the second electronic device and focus accordingly.
In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors, one or more memories, a camera and at least 2 antennas, the camera, the one or more memories being respectively coupled with the one or more processors; the camera is used for acquiring images; the at least 2 antennas are used for receiving Bluetooth signals; the one or more memories are for storing computer program code comprising computer instructions;
the processor is configured to invoke the computer instructions to perform the following operations: in response to a user operation input for a Bluetooth positioning control, determining position information of the second electronic device relative to the first electronic device according to the received Bluetooth signals of the at least 2 antennas; focusing the camera based on the position information.
In combination with the second aspect, in some embodiments, the position information includes a distance of the second electronic device from a plane in which the camera is located and orientation information of the second electronic device with respect to the first electronic device; the processor performs the focusing of the camera based on the position information, including performing: correcting the distance according to the azimuth information of the second electronic equipment relative to the first electronic equipment; and focusing the camera through the corrected distance.
With reference to the second aspect, in some embodiments, the processor performs correcting the distance according to the orientation information of the second electronic device with respect to the first electronic device, including performing: and determining the distance between the second electronic equipment and the first electronic equipment as the corrected distance according to the distance and the azimuth information of the second electronic equipment relative to the first electronic equipment.
In combination with the second aspect, in some embodiments, the position information includes a distance of the second electronic device from a plane in which the camera is located and orientation information of the second electronic device with respect to the first electronic device, and the processor further performs: displaying a preview image in real time under the condition that the camera is turned on;
the processor performs focusing the camera based on the position information, including performing: determining the position of the second electronic equipment on the preview image according to the orientation information of the second electronic equipment relative to the first electronic equipment; determining a face frame on the preview image according to the position of the second electronic equipment on the preview image; focusing the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located, wherein the position of the face frame is the position of the geometric center of the face frame on the preview image, the target face is the face in the face frame in the preview image, and the distance between the target face and the plane where the camera is located is the distance between the second electronic device and the plane where the camera is located.
In combination with the second aspect, in some embodiments, the processor further performs: displaying the face frame on the face position of the preview image.
With reference to the second aspect, in some embodiments, the processor performs determining a face frame on the preview image according to a position of the second electronic device on the preview image, including performing: determining the size of the face frame according to the distance between the target face and the plane where the camera is located and the size of the face model, wherein the size of the face frame is inversely proportional to the distance between the target face and the plane where the camera is located; and determining the position of the face frame on the preview image according to the position of the second electronic equipment on the preview image and the size of the face frame.
With reference to the second aspect, in some embodiments, after determining the face box on the preview image, the processor further performs: inputting the image in the face frame into a face recognizer, and outputting a recognition result as a face when the face recognizer recognizes the face features; and adjusting the size of the face frame to be the size of the face, wherein the position of the face frame is the position of the face.
In combination with the second aspect, in some embodiments, the second electronic device is a pair of bluetooth headsets including a left headset and a right headset, the position information of the second electronic device relative to the first electronic device includes distances of the left headset and the right headset, respectively, to the camera plane, and orientation information of the left headset and the right headset, respectively, relative to the first electronic device;
the processor performs focusing the camera based on the position information, including performing: determining the distance between a target face and the plane where the camera is located as the average value of the distances between the left earphone and the right earphone and the plane where the camera is located, wherein the target face is a face wearing the left earphone and the right earphone; determining the position of the left earphone on the preview image according to the azimuth information of the left earphone relative to the first electronic equipment; determining the position of the right earphone on the preview image according to the azimuth information of the right earphone relative to the first electronic equipment; determining a face frame by taking the position of the left earphone on the preview image and the position of the right earphone on the preview image as boundaries; focusing the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located.
With reference to the second aspect, in some embodiments, the processor performs focusing of the camera based on the position of the face frame and the distance of the target face from the plane in which the camera is located, including performing: determining the distance between the camera and the target face according to the position of the face frame and the distance between the target face and the plane where the camera is located; focusing the camera according to the distance between the camera and the target face.
In combination with the second aspect, in some embodiments, the processor performs determining the position information of the second electronic device relative to the first electronic device from the received bluetooth signals of the at least 2 antennas, including performing: determining an arrival angle of the Bluetooth signal to each antenna according to the wavelength of the Bluetooth signal, the phase difference of the Bluetooth signal received by each 2 antennas of the at least 2 antennas and the position of each antenna on the first electronic device; and determining the position information of the second electronic equipment relative to the first electronic equipment according to the arrival angle of the Bluetooth signal to each antenna and the position of each antenna on the first electronic equipment.
In a third aspect, an embodiment of the present application provides a computer storage medium comprising computer instructions which, when executed on an electronic device, cause the electronic device to perform the camera focusing method described in the first aspect and any possible implementation of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product including instructions, which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any one of the possible implementation manners of the first aspect.
It is to be understood that the electronic device provided by the second aspect, the computer-readable storage medium provided by the third aspect, and the computer program product provided by the fourth aspect are all configured to perform the method provided by the first aspect. For the beneficial effects they achieve, reference may therefore be made to the beneficial effects of the corresponding method, which are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a camera focusing system provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an angle of arrival of a bluetooth signal according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a user interface of a Bluetooth device according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a user interface of a camera program of an electronic device according to an embodiment of the present application;
fig. 5A is a schematic flowchart of a method for focusing a camera according to an embodiment of the present disclosure;
fig. 5B-5D are schematic flowcharts of several camera focusing methods provided by embodiments of the present application;
fig. 6 is a schematic diagram illustrating a bluetooth positioning method according to an embodiment of the present application;
fig. 7 is a schematic diagram for determining a position of a face frame on a preview image according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a camera imaging system provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of an interface for determining a face frame according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a distance principle for determining focusing according to an embodiment of the present application;
fig. 11 is a schematic diagram of a process for determining a face frame on a preview image according to two earphones according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of another camera focusing system according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of this application and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the listed items.
Application scenarios related to embodiments of the present application are described below. In a photo or video scene, focusing is usually needed to make the captured image sharper. For example, when a user turns on the camera of a smartphone and shoots himself, he expects the focus of the camera to lie on his face, so that a relatively clear face is captured. The user can then wear a device with a Bluetooth function, such as a Bluetooth headset, and input an operation on a "Bluetooth positioning" control in the camera program; in response, the smartphone calculates the position of the Bluetooth headset in real time and can focus the camera according to that position. Since the Bluetooth headset is worn at the user's ear, the smartphone can quickly calculate the position of the user's face from the position of the Bluetooth headset and use the face position as the focus point for focusing.
Fig. 1 is a schematic diagram of a camera focusing system, taking a picture of a person as an example. As shown in fig. 1, a user 101 wears a second electronic device 12; the first electronic device 11 and the second electronic device 12 are connected through Bluetooth, and the second electronic device 12 continuously sends a Bluetooth signal to the first electronic device 11. When the user opens the camera program of the first electronic device 11, for example to photograph himself, the first electronic device 11 may display the interface 10 shown in fig. 1. The interface 10 includes: a preview image 106, a Bluetooth-on indication 108, function selection controls such as large aperture 102A, video recording 102B, photograph 102C, Bluetooth positioning 102D, more 102E, photo album 105, photograph control 103 and conversion camera 104, and navigation keys such as return key 1071, home screen button 1072 and call-out-task-history button 1073. The preview image 106 is an image acquired by the first electronic device 11 in real time through the camera. It should be understood that the interface 10 may also include other operation controls, which is not limited in the embodiments of the present application. Any one of the controls may respond to a user operation, such as a touch operation, so that the first electronic device 11 starts the function corresponding to that control.
When the user inputs an operation on the control "Bluetooth positioning 102D", the first electronic device 11, in response, calculates on the Bluetooth signal from the second electronic device 12 in real time to determine the position information of the second electronic device 12. Since the user 101 wears the second electronic device 12, the position of the user's face can be calculated from the position of the second electronic device 12, and the distance from the face of the user wearing the second electronic device 12 to the plane in which the camera of the first electronic device 11 lies can be taken as the distance from the second electronic device 12 to that plane. The position and size of the face frame are then determined on the preview image 106 according to the position information of the second electronic device 12 and the size of the face model, and the face frame is displayed. Further, the first electronic device 11 focuses the camera according to the position of the face frame and the distance between the face of the user wearing the second electronic device 12 and the plane in which the camera of the first electronic device 11 lies. For the specific camera focusing method, reference may be made to the description of fig. 5A below.
In the above process of focusing the camera on a person, the first electronic device 11 can determine the position of the second electronic device 12 by calculating on the received Bluetooth signal sent by the second electronic device 12, and use the position information of the second electronic device 12 as a stable focus-point input source. This avoids the problem that a stable focus point cannot be determined, and the camera therefore cannot focus accurately, in shooting environments with many faces, environments where a specific face is partially or completely occluded, and environments with poor lighting. Since the user wears the second electronic device 12, which continuously transmits a Bluetooth signal to the first electronic device 11, the first electronic device 11 can focus the camera through the position of the second electronic device 12 in real time, and can always calculate that position in real time whether the user moves or the first electronic device 11 moves. Moreover, the first electronic device 11 can determine the position of the second electronic device quickly from the received Bluetooth signal, which avoids the problem that the camera cannot determine the focus point quickly and increases the speed at which the first electronic device focuses the camera.
The first electronic device 11 and the second electronic device 12 may be terminal devices with a Bluetooth function. The first electronic device 11 is a Bluetooth device including at least 2 antennas, and may specifically be a terminal device with a camera such as a smartphone, a tablet computer, a notebook computer or a smart-home device. The second electronic device is a Bluetooth device including a single antenna, and may specifically be a terminal device such as a Bluetooth headset, a Bluetooth speaker, a Bluetooth bracelet or a Bluetooth lamp.
It should be understood that the camera focusing method provided by the embodiment of the present application may also be used for shooting other objects, and at this time, the second electronic device needs to be placed on the object to be shot, and the position of the object to be shot is calculated according to the position of the second electronic device, and the object can be clearly shot by applying the camera focusing method provided by the embodiment of the present application.
For ease of understanding, before describing the method for focusing a camera provided in the embodiments of the present application, the following describes related terms and principles related to the embodiments of the present application.
1. Angle of arrival (AoA) technique
The transmitting end sends a Constant Tone Extension (CTE) signal through a single-antenna device. The receiving end has a multi-antenna array and switches between different receiving antennas in a certain sequence while receiving the CTE; in this way the angle of arrival of the Bluetooth signal is obtained.
2. Principle of AoA orientation
As shown in fig. 2, taking two antennas at the receiving end as an example, antenna 1 and antenna 2 both receive the Bluetooth signal; θ is the angle of arrival of the Bluetooth signal at the antennas, λ is the wavelength of the Bluetooth signal, and d is the distance between antenna 1 and antenna 2. As the figure shows, the signal received by antenna 1 travels a distance shorter by d·cos θ than the signal received by antenna 2. This path difference can also be obtained from the phase difference φ between the signals received at antenna 1 and antenna 2:

d·cos θ = φλ / (2π)

so that

cos θ = φλ / (2πd), and therefore θ = arccos(φλ / (2πd)).

Thus, by sampling the phase information of the two antennas at the receiving end and calculating their phase difference φ, the angle of arrival θ can be deduced.
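A minimal Python sketch of this relationship, assuming the phase difference φ has already been measured, might look as follows:

```python
import math

def angle_of_arrival(phi, wavelength, d):
    """Return theta (radians) from phi = 2*pi*d*cos(theta)/wavelength."""
    cos_theta = phi * wavelength / (2.0 * math.pi * d)
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp measurement noise
    return math.acos(cos_theta)

# Example values: a 2.4 GHz Bluetooth signal (wavelength about 0.125 m)
# and antennas spaced 0.05 m apart.
theta = angle_of_arrival(phi=1.0, wavelength=0.125, d=0.05)
```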
The user interface provided by the embodiments of the present application is described below.
The user interface may be the user interface of the first electronic device 11 shown in fig. 1. Referring to fig. 3, fig. 3 is a schematic view of a Bluetooth settings user interface according to an embodiment of the present application. Illustratively, (A) in fig. 3 shows the Bluetooth settings interface 20. The interface 20 includes a Bluetooth on/off control 201, a settings entry 202 for a paired device (my headset), and a settings entry for an available device (HUAWEI AM08). When the Bluetooth of the first electronic device is off, the first electronic device turns Bluetooth on in response to an operation on the Bluetooth on/off control 201 and displays the Bluetooth-on indication 203 shown in interface 20. In response to a user operation, for example a touch operation, acting on the settings entry 202 of my headset, the first electronic device displays the settings interface 21 of the connected Bluetooth device as shown in (B) in fig. 3. The interface 21 may include a positioning on/off control 211, a media audio on/off control 212, a call audio on/off control 213 and a Bluetooth-on indication 214. The first electronic device may receive the Bluetooth signal transmitted by the second electronic device in response to a user operation on the positioning on/off control 211.
Referring to fig. 4, fig. 4 is a schematic view of a user interface of a camera according to an embodiment of the present disclosure, where the interface 30 in fig. 4 includes: preview image 301, bluetooth on indication 306, function selection controls such as large aperture 302A, video 302B, take 302C, bluetooth position 302D, more 302E, photo album 305, take control 303, and transition camera 304. The preview image 301 is an image acquired by the first electronic device in real time through the camera. It should be understood that the interface 30 may also include other operational controls, which are not limited in the embodiments of the present application. Any one of the controls can be used for responding to the operation of the user, such as a touch operation, so that the first electronic equipment starts the function corresponding to the control. The first electronic device may calculate position information of the second electronic device relative to the first electronic device from the received bluetooth signal in response to a user operation input for the bluetooth positioning 302D. And the first electronic device can also respond to the operation input by the control for starting the camera program and calculate the position information of the second electronic device according to the received Bluetooth signal from the second electronic device.
The method for focusing the camera provided by the embodiment of the present application is specifically described below with reference to fig. 5A. Fig. 5A is a schematic flowchart of a method for focusing a camera according to an embodiment of the present disclosure, where the method may be implemented by the system shown in fig. 1, as shown in fig. 5A, the method includes, but is not limited to, the following steps:
S01: the second electronic device sends a Bluetooth signal to the first electronic device.
The second electronic device transmits Bluetooth signals to the first electronic device via its single antenna at a fixed, frequent interval (for example, once every 6 ms or once every 10 ms).
S02: the first electronic device continuously receives the Bluetooth signal from the second electronic device.
The user opens the camera program of the first electronic device, the first electronic device displays the interface 30 shown in fig. 4, displays the preview image acquired by the camera in real time, and the first electronic device performs S03 when the user performs an input operation, such as a touch operation, on the bluetooth positioning control in the interface 30.
S03: in response to the input operation directed at the Bluetooth positioning control, the first electronic device calculates the position information of the second electronic device relative to the first electronic device from the received Bluetooth signal. The position information includes the distance of the second electronic device from the plane in which the camera of the first electronic device lies and the orientation information of the second electronic device relative to the first electronic device.
In one implementation, a first electronic device may determine location information of a second electronic device from an angle of arrival θ of a bluetooth signal from the second electronic device and a triangulation method. Taking the example that the first electronic device includes 2 antennas, the specific principle is as follows:
as shown in fig. 6, with the geometric center of the camera of the first electronic device as the origin of the coordinate system, the coordinates of the antenna 1 (S1) are (X1, Y1, Z1), the coordinates of the antenna 2 (S2) are (X2, Y2, Z2), the azimuth angle of the second electronic device based on the antenna 1 is β 1, the pitch angle is ξ 1, the azimuth angle of the second electronic device based on the antenna 2 is β 2, and the pitch angle is ξ 2. Assuming that the coordinates of the second electronic device T are (X, Y, Z), the coordinates of the projection T' of the second electronic device on the XY plane are (X, Y, 0). Wherein β 1, ξ 1, β 2, and ξ 2 may be determined by the arrival angle θ 1 of the bluetooth signal at the antenna 1 and the arrival angle θ 2 of the bluetooth signal at the antenna 2, and the key angles in fig. 6 may be represented by the coordinates of the antenna 1 and the antenna 2 and the coordinates of the second electronic device as follows:
Figure BDA0002993401230000091
Figure BDA0002993401230000092
Figure BDA0002993401230000093
Figure BDA0002993401230000094
as can be seen from figure 6 of the drawings,
distance between two base stations
Figure BDA0002993401230000095
In the triangle S1T' S2, we can derive from the sine theorem:
Figure BDA0002993401230000096
thereby, the device
Figure BDA0002993401230000097
In triangle TT' S, the distance of T from S1 is obtained
Figure BDA0002993401230000098
Thus, X = R' cos ξ 1cos β 1+ X1; y = R' cos ξ 1sin β 1+ Y1; z = R' sin xi 1+ Z1;
up to this point, the coordinates (X, Y, Z) of the second electronic device may be obtained. Wherein Z may represent a distance of the second electronic device from a plane in which the camera of the first electronic device is located, and the orientation information of the second electronic device with respect to the first electronic device may be represented by X, Y and Z.
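Under the assumptions of the derivation above (azimuths measured in the XY plane from the antenna baseline, pitch angles toward the Z axis), the triangulation can be sketched in Python as follows; this is an illustration of the reconstructed formulas, not code from the application.

```python
import math

def locate_second_device(s1, s2, beta1, xi1, beta2):
    """Return (X, Y, Z) of the transmitter T from antenna positions s1, s2
    and the azimuth/pitch angles (radians) measured at them."""
    X1, Y1, Z1 = s1
    X2, Y2, _ = s2
    d12 = math.hypot(X2 - X1, Y2 - Y1)           # antenna baseline in the XY plane
    # sine theorem in triangle S1-T'-S2, T' being the XY-plane projection of T
    s1_to_tp = d12 * math.sin(beta2) / math.sin(beta2 - beta1)
    R = s1_to_tp / math.cos(xi1)                 # slant range from S1 to T
    X = R * math.cos(xi1) * math.cos(beta1) + X1
    Y = R * math.cos(xi1) * math.sin(beta1) + Y1
    Z = R * math.sin(xi1) + Z1
    return X, Y, Z
```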
In another implementation, with a geometric center of a camera of the first electronic device as a coordinate origin, the first electronic device may calculate, according to an arrival angle of the bluetooth signal at least 2 antennas of the first electronic device, a distance of the second electronic device from a plane in which the camera of the first electronic device is located, and an angle (azimuth, pitch) of the second electronic device with respect to the geometric center of the camera of the first electronic device.
It should be understood that the above two implementation manners are merely examples, and the embodiment of the present application is not limited to the method for calculating the distance between the second electronic device and the plane where the camera of the first electronic device is located and the orientation information of the second electronic device relative to the first electronic device.
S04: the first electronic device focuses the camera according to the position information of the second electronic device relative to the first electronic device.
Wherein focusing the camera may be adjusting a distance between the camera and the image sensor in the first electronic device such that an image formed on the image sensor is clearer. The distance between the camera and the image sensor may be determined from position information of the second electronic device relative to the first electronic device.
S04 may be implemented by, but is not limited to, the following:
implementation mode (one): the first electronic equipment determines a face frame on the preview image according to the position information of the second electronic equipment relative to the first electronic equipment, and then the camera can be focused based on the position of the face frame and the distance between the target face and the plane where the camera is located. The distance between the target face and the plane where the camera is located is the distance Z between the second electronic device and the plane where the camera is located, and the target face is the face of a user wearing the second electronic device.
For example, take the second electronic device to be a Bluetooth headset: either only one of the two earphones is equipped with a Bluetooth transmitting antenna, or both earphones are each equipped with one. The following describes, respectively, how the camera is focused according to the position information of one earphone and according to the position information of both earphones.
In some embodiments, the position information of the second electronic device is the position information of one earphone. Illustratively, the process of focusing the camera is described taking the coordinates (X, Y, Z) of the second electronic device obtained in the first implementation of S03 as the position information of the second electronic device relative to the first electronic device. As shown in fig. 5B, the method in which the first electronic device focuses the camera based on the position information may include the following steps:
S041a: the first electronic device determines the position of the second electronic device on the preview image according to the orientation information of the second electronic device relative to the first electronic device.
As shown in fig. 7, the first electronic device determines the position (x, y) of the second electronic device on the preview image from the orientation information of the second electronic device relative to the first electronic device and the camera imaging principle (illustrated in fig. 8). In fig. 8, the angle α represents the orientation of the second electronic device relative to the first electronic device: α is the angle between the Z axis and the line connecting the second electronic device with the camera of the first electronic device, so that

tan α = √(X² + Y²) / Z

The preview image is the image received by the image sensor, and the first electronic device can calculate the position (x, y) of the second electronic device on the preview image from the distance between the image sensor and the camera and the angle α.
In some embodiments, the first electronic device may further determine the position of the second electronic device on the preview image according to a mapping relationship between a plane (a plane parallel to the plane of the camera) where the second electronic device is located and the image sensor of the first electronic device and position information (X, Y, Z) of the second electronic device relative to the first electronic device.
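As an illustration of S041a, a simple pinhole-model sketch is given below. Expressing coordinates relative to the centre of the preview image is a simplifying assumption, not something specified by the application.

```python
def project_to_preview(X, Y, Z, L):
    """Pinhole projection of the device position (X, Y, Z) onto the preview
    image, where L is the camera-to-image-sensor distance; the result is
    relative to the image centre, in the same units as L."""
    return (L * X / Z, L * Y / Z)
```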
S041b: the first electronic device determines a face frame on the preview image according to the position of the second electronic device on the preview image and the distance between the target face and the plane where the camera is located.
In some embodiments, the process of the first electronic device determining the face frame on the preview image may be:
1) The first electronic device determines the size w × h of the face frame according to the distance between the target face and the plane where the camera is located and the size of the face model, where w is the width of the face frame and h is its height. The size of the face frame is inversely proportional to the distance between the target face and the plane where the camera is located. The size of the face model can be an empirical size determined from the sizes of a large number of faces, or a size set by the user.
2) The first electronic device determines the position of the face frame on the preview image according to the position of the second electronic device on the preview image and the size of the face frame.
In one implementation, the position of the face frame may be determined as follows: the first electronic device obtains, as input by the user, which ear (left or right) the second electronic device is worn on, and determines the geometric center of the face frame from the position of the second electronic device on the preview image. For example, as shown in fig. 7, the position 701 of the second electronic device is (x, y). When the user wears the earphone with the Bluetooth transmitting antenna on the right ear and the width of the face frame is w, the abscissa x' = x - w/2 and the ordinate y' = y give the position 702 (x', y') of the face frame, which is the geometric center of the face frame. Similarly, when the user wears the earphone with the Bluetooth transmitting antenna on the left ear, the position 702 of the face frame is determined as (x + w/2, y). A sketch combining these two steps is given below.
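For illustration, the sketch below combines the frame-size rule of step 1) with the ear-dependent offset of step 2); the face-model dimensions and the sensor distance are hypothetical example values, not taken from the application.

```python
def face_frame(x, y, z_face, worn_on, face_w=0.16, face_h=0.22, L=0.004):
    """(x, y): earphone position on the preview image; z_face: distance of the
    target face from the plane of the camera; worn_on: "left" or "right"."""
    w = L * face_w / z_face   # frame size is inversely proportional to distance
    h = L * face_h / z_face
    cx = x - w / 2 if worn_on == "right" else x + w / 2
    return (cx, y), (w, h)    # geometric centre and size of the face frame
```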
In some implementations, after the first electronic device determines the rough face frame, a face may be recognized in a face recognition manner, and the size and the position of the face frame may be further adjusted according to the recognized face, which may be: and inputting the image in the face frame into a face recognizer, outputting a recognition result as a face when the face recognizer recognizes the face features, and determining the central position of the face and the size of the face according to the face features. Furthermore, the first electronic device adjusts the size of the face frame to be the size of the face, and the position of the face frame is the position of the face. The face recognizer can extract the features of the input image and output a recognition result according to the matching degree of the extracted features and the face features; the face recognizer may also output a recognition result according to a correlation value between an input image and a template of a face feature. The human face features can be features of human face five sense organs (eyes, nose, mouth and the like), skin color, contour, texture and the like which are not changed under different states.
In some embodiments, the process of the first electronic device determining the face frame on the preview image may be:
and setting the face frame by taking the position of the second electronic equipment in the preview image as a center and twice the width (2 w) and the height h of the face model. Inputting the image in the face frame into a face recognizer, outputting a recognition result as a face when the face recognizer recognizes the face features, determining the central position of the face and the size of the face according to the face features, adjusting the size of the face frame to be the size of the recognized face, and adjusting the position of the face frame to be the central position of the recognized face.
In other embodiments, the process by which the first electronic device determines the face frame on the preview image may also be: the first electronic device determines a range centred on the position of the second electronic device in the preview image with a radius of twice the width (2w) of the face model, and inputs the image within this range of the preview image into the face recognizer. When the face recognizer recognizes facial features, it outputs the recognition result as a face, and the central position and the size of the face are determined from the facial features; the size of the face frame is then determined as the size of the recognized face, and the position of the face frame is the central position of the recognized face.
It should be understood that the implementation of determining the face frame according to the position of the second electronic device on the preview image is not limited to the above, and the embodiments of the present application do not limit it.
After the first electronic device determines the position of the face frame, as shown in fig. 9, the face frame 901 may be displayed in the preview image.
S041c: the first electronic device focuses the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located. The position of the face frame is the position of its geometric center on the preview image, and the target face is the face within the face frame in the preview image, that is, the face of the user wearing the second electronic device.
Specifically, in one implementation, the first electronic device may determine the direction of the position of the face frame on the preview image relative to the camera, where the direction may be the included angle between the Z-axis and the line connecting the position of the face frame and the camera. It may then determine the distance between the camera and the target face from this included angle and the distance between the target face and the plane where the camera is located, and focus the camera based on the distance between the camera and the target face. For example, the distance between the camera and the target face may be determined as follows. As shown in fig. 10, the position of the face frame on the preview image is (X', Y'), the distance between the preview image and the camera (that is, between the image sensor and the camera) is L, the position of the target face is (X', Y', Z), and the distance between the target face and the plane where the camera is located is Z. The included angle η between the Z-axis and the line connecting the position of the face frame and the camera satisfies

$$\tan\eta = \frac{\sqrt{X'^2 + Y'^2}}{L}$$

and the distance z between the camera and the target face then follows, as can be seen from fig. 10, by combining η with the distance Z between the target face and the plane where the camera is located:

$$z = \frac{Z}{\cos\eta}$$
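As a numeric illustration of the two formulas (function and parameter names assumed):

```python
import math

# z = Z / cos(eta), with tan(eta) = sqrt(X'^2 + Y'^2) / L, as derived above.
def camera_to_face_distance(x_p, y_p, L, Z):
    eta = math.atan(math.hypot(x_p, y_p) / L)  # angle to the Z-axis
    return Z / math.cos(eta)
```

As a sanity check, for a face frame at the image center (0, 0) the angle η is 0 and the result is simply Z, as expected.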
In some embodiments, the second electronic device includes a left earphone and a right earphone, each of which includes a Bluetooth transmitting antenna. The position information of the second electronic device relative to the first electronic device includes the distances from the left earphone and the right earphone to the plane where the camera is located, and the orientation information of the left earphone and the right earphone relative to the first electronic device. The first electronic device may focus the camera according to the position information of the left and right earphones.
Referring to fig. 5C, fig. 5C is a method for focusing a camera according to an embodiment of the present application, where when the position information of the second electronic device is position information of the left and right earphones, a process of the first electronic device focusing the camera based on the position information in the method may include the following steps:
S042a: the first electronic device determines the distance between the target face and the plane where the camera is located according to the distance between the left earphone and that plane and the distance between the right earphone and that plane.
In some embodiments, the first electronic device determines the distance between the target face and the plane where the camera is located to be the average of the distances from the left earphone and the right earphone to that plane, the target face being the face of the user wearing the left and right earphones.
In other embodiments, the first electronic device may determine the distance between the target face and the plane where the camera is located to be either the distance from the left earphone to that plane or the distance from the right earphone to that plane.
S042b: the first electronic device determines the position of the left earphone on the preview image according to the orientation information of the left earphone relative to the first electronic device. For this process, reference may be made to the related description in S041a for the case where the second electronic device is a single earphone.
S042c: the first electronic device determines the position of the right earphone on the preview image according to the orientation information of the right earphone relative to the first electronic device. For this process, reference may be made to the related description in S041a for the case where the second electronic device is a single earphone.
S042d: the first electronic equipment determines the face frame by taking the position of the left earphone on the preview image and the position of the right earphone on the preview image as boundaries.
Illustratively, as shown in fig. 11 (A), the positions of the left earphone 1101 and the right earphone 1102 on the preview image obtained by the first electronic device are (x_left, y_left) and (x_right, y_right), respectively. The position (x', y') of the face frame 1103 can be determined as the midpoint of the left earphone 1101 and the right earphone 1102 on the preview image; further, the width w of the face frame 1103 is x_right - x_left, and the height h is the height h of the face model. Once the position of the face frame on the preview image is determined, the face frame 1104 is displayed on the preview image as shown in fig. 11 (B).
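A sketch of S042d under these definitions (function and parameter names assumed):

```python
# Face frame bounded by the two earphone positions on the preview image.
def face_frame_from_earphones(left, right, h):
    (xl, yl), (xr, yr) = left, right
    center = ((xl + xr) / 2, (yl + yr) / 2)  # midpoint of the two earphones
    width = xr - xl                          # w = x_right - x_left
    return center, (width, h)                # h is the face-model height
```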
S042e: the first electronic device focuses the camera based on the position of the face frame and the distance of the target face from the plane where the camera is located. For a related description of the above S041c when the first electronic device is a headset, the description of the process may be omitted here.
In some embodiments, the first electronic device may also focus the camera without determining a face frame, using implementation (2).
Implementation (2): the first electronic device focuses the camera according to the position information of the second electronic device relative to the first electronic device.
For example, taking the second electronic device as a Bluetooth headset, either only one of the two earphones may be equipped with a Bluetooth transmitting antenna, or each of the two earphones may be equipped with one. The manner of focusing the camera according to the position information of one earphone and the manner of focusing it according to the position information of two earphones are described below in turn.
In some embodiments, when the position information of the second electronic device is the position information of one earphone, focusing the camera may be achieved through the following embodiment A or embodiment B:
Embodiment A: the position information of the second electronic device relative to the first electronic device includes the distance Z between the second electronic device and the plane where the camera of the first electronic device is located, and the first electronic device adjusts the distance between the image sensor and the camera based on the distance Z to complete focusing of the camera.
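The embodiment does not spell out how the sensor-to-camera distance follows from Z; a common optical model for this step is the thin-lens equation, sketched here under the assumption of a known focal length f in the same units as Z. This is an illustrative model, not the embodiment's stated method.

```python
# Thin-lens model (an assumption): 1/f = 1/u + 1/v, with object distance u = Z.
def sensor_to_lens_distance(Z, f):
    if Z <= f:
        raise ValueError("object must lie beyond the focal length")
    return 1.0 / (1.0 / f - 1.0 / Z)  # image distance v
```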
Embodiment B: the position information of the second electronic device relative to the first electronic device includes the distance between the second electronic device and the plane where the camera of the first electronic device is located, and the orientation information of the second electronic device relative to the first electronic device. The first electronic device corrects the distance from the second electronic device to the plane where the camera is located according to this orientation information to obtain a corrected distance, and then adjusts the distance between the image sensor and the camera based on the corrected distance to complete focusing of the camera. The corrected distance may be the distance between the second electronic device and the first electronic device.
It should be understood that the orientation information of the second electronic device with respect to the first electronic device may be the angle of the second electronic device with respect to the first electronic device, and thus the corrected distance, i.e. the distance of the second electronic device from the first electronic device, may be determined by the angle and the distance of the second electronic device from the plane in which the camera is located.
In some specific embodiments, the distance z between the second electronic device and the first electronic device may be determined from the coordinate information of the second electronic device. For example, taking the geometric center of the camera of the first electronic device as the origin of the coordinate system, with the position information of the second electronic device relative to the first electronic device given by the coordinates (X, Y, Z), where Z is the distance between the second electronic device and the plane where the camera of the first electronic device is located, the first electronic device may determine z from the coordinates as:

$$z = \sqrt{X^2 + Y^2 + Z^2}$$
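Putting embodiment B together, a sketch under the same assumptions (camera-centered coordinates, and reusing the thin-lens helper from the sketch after embodiment A):

```python
import math

def focus_embodiment_b(X, Y, Z, f):
    z = math.sqrt(X**2 + Y**2 + Z**2)     # corrected distance, as derived above
    return sensor_to_lens_distance(z, f)  # sensor position under the assumed thin-lens model
```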
In other embodiments, the second electronic device includes a left earphone and a right earphone, each of which includes a Bluetooth transmitting antenna. The position information of the second electronic device relative to the first electronic device includes the distances from the left earphone and the right earphone to the plane where the camera is located, and the orientation information of the left earphone and the right earphone relative to the first electronic device. The first electronic device may focus the camera according to the position information of the left and right earphones.
Taking the position information (X_left, Y_left, Z_left) of the left earphone of the second electronic device relative to the first electronic device and the position information (X_right, Y_right, Z_right) of the right earphone relative to the first electronic device as an example, the first electronic device focusing the camera may be achieved through embodiment C and embodiment D:
Embodiment C: the first electronic device determines the distance between the second electronic device and the plane where the camera of the first electronic device is located to be the average of the distances from the left earphone and the right earphone to that plane, and then adjusts the distance between the image sensor and the camera based on this average to complete focusing of the camera. For example, if the distance from the left earphone to the plane where the camera is located is Z_left and the distance from the right earphone to that plane is Z_right, the first electronic device focuses the camera based on (Z_left + Z_right)/2.
Embodiment D: referring to fig. 5D, fig. 5D shows a camera focusing method in which the process of the first electronic device focusing the camera based on the position information may include the following steps S043a to S043d:
S043a: the first electronic device may determine the position of the second electronic device to be the midpoint of the positions of the left earphone and the right earphone, where the position information of the second electronic device includes the positions of the left earphone and the right earphone.
S043b: the first electronic device determines orientation information of the midpoint location relative to the first electronic device and a distance of the midpoint location to a plane in which a camera of the first electronic device is located.
S043c: the first electronic device determines the distance between the midpoint position and the first electronic device according to the orientation information of the midpoint position relative to the first electronic device and the distance from the midpoint position to the plane where the camera of the first electronic device is located. In some specific embodiments, this process may refer to the description of determining z in embodiment B, which is not repeated here.
S043d: the first electronic device adjusts the distance between the image sensor and the camera based on the distance between the midpoint position and the first electronic device, and focusing of the camera is completed.
S05 may be performed after the first electronic device completes focusing on the camera.
S05: the first electronic device shoots based on the focusing result, and the preview image displayed by the first electronic device after focusing is determined to be the shot image.
By implementing the camera focusing method provided by the embodiments of the application, the user wears the second electronic device (for example, a Bluetooth headset), which sends a Bluetooth signal to the first electronic device. When the user opens the camera program to take a picture of a person, the first electronic device can determine the position information of the second electronic device relative to the first electronic device according to the received Bluetooth signal, and then focus the camera according to that position information. In this way, whether the face is partially or completely blocked, the lighting is poor, or the face cannot be recognized by the face recognizer, the first electronic device can always determine a stable focusing position and focus the camera accordingly. Moreover, during shooting the first electronic device calculates the position of the second electronic device in real time, so that even when the first or second electronic device moves, the first electronic device can still quickly determine the position of the second electronic device and focus according to it.
An exemplary electronic device 100 provided by embodiments of the present application is described below.
Fig. 12 shows a schematic structural diagram of an electronic device 100, which may be the first electronic device shown in fig. 1.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna a, an antenna B, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein, the different processing units may be independent devices or may be integrated in one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus, enabling communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through the I2S interface, so as to implement a function of receiving a call through a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to implement the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, to transmit data between the electronic device 100 and a peripheral device, or to connect an earphone and play audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna a, the antenna B, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
Antenna A and antenna B are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve antenna utilization. For example, antenna A may be multiplexed as a diversity antenna for a wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna a, filter, amplify, etc. the received electromagnetic wave, and transmit the filtered electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna a to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna B, performs frequency modulation and filtering processing on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be transmitted from the processor 110, frequency modulate and amplify them, and convert them into electromagnetic waves via the antenna B for radiation.
In some embodiments, antenna a of electronic device 100 is coupled to mobile communication module 150 and antenna B is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS). In this embodiment, the antenna B may be an antenna array including at least two antennas, and may receive a bluetooth signal transmitted by a bluetooth headset or other bluetooth devices.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, where it is converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts it into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1. In the embodiments of the present application, the camera 193 may also be referred to as a camera lens, and the plane where the camera is located is the plane where the camera lens is located.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, which processes input information quickly by referring to a biological neural network structure, for example, by referring to a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like. In some embodiments of the present application, in a shooting scene, after the electronic device responds to a user operation for "bluetooth positioning," the electronic device determines the position information of the bluetooth headset or other bluetooth devices in real time, and when the electronic device determines the approximate position and size of a face frame on a preview image according to the position information of the bluetooth headset or other bluetooth devices, a face in the preview image may be recognized by a neural-network (NN) computing processor, so as to further determine the size and position of the face frame. In other embodiments, after determining the position of the bluetooth headset or other bluetooth devices in the preview image, the electronic device determines a range on the preview image according to the position, and may further identify a face in the image within the range through the neural network computing processor, and further determine a face frame on the preview image.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into analog audio signals for output, and also used to convert analog audio inputs into digital audio signals. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal to the microphone 170C by uttering a voice signal close to the microphone 170C through the mouth of the user. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and perform directional recording.
The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. And then according to the opening and closing state of the leather sheath or the opening and closing state of the flip cover, the automatic unlocking of the flip cover is set.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, the electronic device 100 may utilize the distance sensor 180F to range to achieve fast focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
Referring to fig. 13, fig. 13 is a schematic structural diagram of another camera focusing system according to an embodiment of the present disclosure; the camera focusing system includes a first electronic device 1300 and a second electronic device 1400.
As shown in fig. 13, the first electronic device 1300 includes a bluetooth positioning system 1301, a focusing system 1302, and an imaging system 1303.
The Bluetooth positioning system 1301 is configured to receive a Bluetooth signal and process it to determine the position of the second electronic device 1400. The Bluetooth positioning system 1301 includes a Bluetooth signal receiver 1301A and a Bluetooth signal processing module 1301B: the Bluetooth signal receiver 1301A may be a multi-antenna array for receiving Bluetooth signals, and the Bluetooth signal processing module 1301B is configured to process the received Bluetooth signals to determine the position of the second electronic device 1400. For details of this process, reference may be made to the related descriptions of S01-S03 in fig. 5A, which are not repeated here.
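The positioning details are deferred to S01-S03; purely as background, the standard angle-of-arrival relation used by multi-antenna Bluetooth receivers can be sketched as follows. This is a two-antenna simplification with assumed names, not the embodiment's stated method.

```python
import math

# For two antennas spaced d apart, a plane wave arriving at angle theta
# (from broadside) produces a phase difference dphi = 2*pi*d*sin(theta)/lambda.
def angle_of_arrival(dphi, d, wavelength):
    s = dphi * wavelength / (2 * math.pi * d)
    return math.asin(max(-1.0, min(1.0, s)))  # clamp for numeric safety
```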
The focusing system 1302 may include a face recognition module 1302A and a focusing module 1302B, and is configured to focus the camera according to the position of the second electronic device 1400 determined by the Bluetooth positioning system 1301. The face recognition module 1302A may quickly determine a face frame on the preview image according to the position of the second electronic device 1400, and the focusing module 1302B is configured to focus according to the position of the face frame and the distance from the second electronic device 1400 to the plane where the camera is located. For the focusing process, reference may be made to the related descriptions of S041a-S041c in fig. 5B, S042a-S042e in fig. 5C, or S043a-S043d in fig. 5D, which are not repeated here.
The imaging system 1303 is configured to perform shooting according to the focusing result, and determine that the preview image displayed by the focused first electronic device is a shot image.
As shown in fig. 13, the second electronic device 1400 may include a Bluetooth positioning system 1401 corresponding to that of the first electronic device 1300. The Bluetooth positioning system 1401 includes a Bluetooth signal transmitter 1401A, which may be a single antenna for transmitting Bluetooth signals.
As used in the above embodiments, the term "when …" may be interpreted to mean "if …" or "after …" or "in response to determination …" or "in response to detection of …", depending on the context. Similarly, the phrase "in determining …" or "if a (stated condition or event) is detected" may be interpreted to mean "if … is determined" or "in response to …" or "upon detection of (stated condition or event)" or "in response to detection of (stated condition or event)" depending on the context.
It is to be understood that one of ordinary skill in the art would recognize that the elements and algorithm steps of the various examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Those of skill would appreciate that the functions described in connection with the various illustrative logical blocks, modules, and algorithm steps disclosed herein may be implemented as hardware, software, firmware, or any combination thereof. If implemented in software, the functions described in the various illustrative logical blocks, modules, and steps may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. The computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium, such as a data storage medium, or any communication medium including a medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described herein. The computer program product may include a computer-readable medium.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (21)

1. A method for focusing a camera is applied to a first electronic device, wherein the first electronic device comprises the camera and at least 2 antennas, and the method comprises the following steps:
receiving Bluetooth signals sent by second electronic equipment through the at least 2 antennas;
under the condition that the camera is turned on, in response to user operation input aiming at a Bluetooth positioning control, determining the position information of the second electronic equipment relative to the first electronic equipment according to the received Bluetooth signals of the at least 2 antennas;
focusing the camera based on the position information.
2. The method of claim 1, wherein the location information comprises a distance of the second electronic device from a plane in which the camera is located and orientation information of the second electronic device relative to the first electronic device; the focusing the camera based on the position information includes:
correcting the distance according to the azimuth information of the second electronic equipment relative to the first electronic equipment;
and focusing the camera through the corrected distance.
3. The method of claim 2, wherein the correcting the distance according to the orientation information of the second electronic device relative to the first electronic device comprises:
and determining the distance between the second electronic equipment and the first electronic equipment as the corrected distance according to the distance and the azimuth information of the second electronic equipment relative to the first electronic equipment.
4. The method of claim 1, wherein the position information comprises a distance between the second electronic device and the plane where the camera is located, and orientation information of the second electronic device relative to the first electronic device, and the method further comprises: displaying a preview image in real time when the camera is turned on;
wherein focusing the camera based on the position information comprises:
determining the position of the second electronic device on the preview image according to the orientation information of the second electronic device relative to the first electronic device;
determining a face frame on the preview image according to the position of the second electronic device on the preview image; and
focusing the camera based on the position of the face frame and the distance between a target face and the plane where the camera is located, wherein the position of the face frame is the position of the geometric center of the face frame on the preview image, the target face is the face within the face frame in the preview image, and the distance between the target face and the plane where the camera is located is the distance between the second electronic device and the plane where the camera is located.
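For illustration only (not part of the claims): one way to realize the claim-4 step of mapping the second electronic device's orientation onto the preview image, assuming a pinhole camera whose optical axis passes through the image centre. The focal length in pixels is an illustrative value, not taken from the patent.

```python
import math

def project_to_preview(azimuth_deg: float, elevation_deg: float,
                       image_size_px=(1920, 1080),
                       focal_length_px: float = 1500.0):
    """Map orientation angles to a preview pixel under a pinhole model:
    x = cx + f*tan(azimuth), y = cy - f*tan(elevation)."""
    cx, cy = image_size_px[0] / 2, image_size_px[1] / 2
    x = cx + focal_length_px * math.tan(math.radians(azimuth_deg))
    y = cy - focal_length_px * math.tan(math.radians(elevation_deg))
    return (x, y)

print(project_to_preview(10.0, 0.0))  # ~(1224.5, 540.0) on a 1920x1080 preview
```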
5. The method of claim 4, further comprising: displaying the face frame at the face position on the preview image.
6. The method of claim 4 or 5, wherein determining the face frame on the preview image according to the position of the second electronic device on the preview image comprises:
determining the size of the face frame according to the distance between the target face and the plane where the camera is located and the size of a face model, wherein the size of the face frame is inversely proportional to the distance between the target face and the plane where the camera is located; and
determining the position of the face frame on the preview image according to the position of the second electronic device on the preview image and the size of the face frame.
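For illustration only (not part of the claims): a sketch of the claim-6 sizing rule under a pinhole-camera assumption, where the frame's pixel size is focal length times the face model's physical size divided by the distance, and is therefore inversely proportional to the distance. The face-model dimensions and focal length are illustrative values.

```python
def face_frame(center_xy, distance_m,
               face_width_m: float = 0.16,    # assumed face-model width
               face_height_m: float = 0.22,   # assumed face-model height
               focal_length_px: float = 1500.0):
    """Size the face frame from distance and a face model; centre the frame
    on the device's projected position (geometric centre per claim 4)."""
    cx, cy = center_xy
    w = focal_length_px * face_width_m / distance_m    # inversely proportional
    h = focal_length_px * face_height_m / distance_m   # to the distance
    return (cx - w / 2, cy - h / 2, w, h)              # (left, top, width, height)

print(face_frame((960, 540), 1.6))  # ~150 x 206 px frame centred on the device
```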
7. The method of claim 6, wherein after the face frame is determined on the preview image, the method further comprises:
inputting the image within the face frame into a face recognizer, wherein the face recognizer outputs a recognition result indicating a face when it recognizes facial features; and
adjusting the size of the face frame to the size of the recognized face, wherein the position of the face frame is the position of the recognized face.
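For illustration only (not part of the claims): a sketch of the claim-7 refinement using an OpenCV Haar-cascade detector as a stand-in for the claim's face recognizer; the patent does not specify a particular recognizer.

```python
import cv2  # OpenCV, used here only as an example recognizer

def refine_face_frame(preview_bgr, frame):
    """Run a detector on the image inside the estimated frame; if a face is
    found, snap the frame to the detected face, else keep the estimate."""
    left, top, w, h = (int(v) for v in frame)
    roi = preview_bgr[top:top + h, left:left + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return frame                       # keep the Bluetooth-derived frame
    fx, fy, fw, fh = faces[0]              # adjust frame to the detected face
    return (left + fx, top + fy, fw, fh)
```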
8. The method of claim 1, wherein the second electronic device is a pair of Bluetooth earphones comprising a left earphone and a right earphone, and the position information of the second electronic device relative to the first electronic device comprises the distances between the left earphone and the right earphone, respectively, and the plane where the camera is located, and orientation information of the left earphone and the right earphone, respectively, relative to the first electronic device; and
focusing the camera based on the position information comprises:
determining the distance between a target face and the plane where the camera is located as the average of the distances between the left earphone and the right earphone, respectively, and the plane where the camera is located, wherein the target face is the face wearing the left earphone and the right earphone;
determining the position of the left earphone on the preview image according to the orientation information of the left earphone relative to the first electronic device;
determining the position of the right earphone on the preview image according to the orientation information of the right earphone relative to the first electronic device;
determining a face frame bounded by the position of the left earphone on the preview image and the position of the right earphone on the preview image; and
focusing the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located.
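For illustration only (not part of the claims): a sketch of claim 8 with two earphones. Averaging the two plane distances follows the claim; the head margins added around the ear-to-ear span are an assumed heuristic so the frame covers the whole face, not a value from the patent.

```python
def face_frame_from_earphones(left_px, right_px,
                              left_dist_m: float, right_dist_m: float):
    """Target-face distance = mean of the earphones' plane distances;
    face frame bounded by the earphones' projected positions."""
    target_dist = (left_dist_m + right_dist_m) / 2.0
    (lx, ly), (rx, ry) = left_px, right_px
    width = abs(rx - lx)
    top = min(ly, ry) - 0.8 * width        # assumed margin above the ears
    bottom = max(ly, ry) + 0.6 * width     # assumed margin below the ears
    frame = (min(lx, rx), top, width, bottom - top)
    return frame, target_dist

frame, dist = face_frame_from_earphones((880, 520), (1040, 525), 1.58, 1.62)
print(frame, dist)  # ear-to-ear frame and a 1.60 m target-face distance
```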
9. The method of any one of claims 4-8, wherein focusing the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located comprises:
determining the distance between the camera and the target face according to the position of the face frame and the distance between the target face and the plane where the camera is located; and
focusing the camera according to the distance between the camera and the target face.
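For illustration only (not part of the claims): a sketch of the claim-9 geometry. The pinhole back-projection (lateral offset in metres = plane distance × pixel offset / focal length) is an assumption about how the off-axis position would be converted; the camera-to-face distance then follows from the Pythagorean theorem.

```python
import math

def camera_to_face_distance(frame_center_px, image_center_px,
                            plane_distance_m: float,
                            focal_length_px: float = 1500.0) -> float:
    """Combine the frame's off-axis position with the plane distance to get
    the camera-to-target-face distance used for focusing."""
    dx = frame_center_px[0] - image_center_px[0]
    dy = frame_center_px[1] - image_center_px[1]
    lateral_m = plane_distance_m * math.hypot(dx, dy) / focal_length_px
    return math.hypot(plane_distance_m, lateral_m)

print(camera_to_face_distance((1200, 540), (960, 540), 1.6))  # ~1.62 m
```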
10. The method of any one of claims 1-9, wherein determining the position information of the second electronic device relative to the first electronic device according to the received Bluetooth signals of the at least 2 antennas comprises:
determining the angle of arrival of the Bluetooth signal at each antenna according to the wavelength of the Bluetooth signal, the phase difference of the Bluetooth signal received by every two of the at least 2 antennas, and the position of each antenna on the first electronic device; and
determining the position information of the second electronic device relative to the first electronic device according to the angle of arrival of the Bluetooth signal at each antenna and the position of each antenna on the first electronic device.
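For illustration only (not part of the claims): the standard angle-of-arrival relation for one antenna pair. With spacing d, wavelength λ, and measured phase difference Δφ, Δφ = 2π·d·sin(θ)/λ, so θ = asin(Δφ·λ/(2π·d)). The angle is measured from the array broadside, a common AoA convention that the claim itself does not fix.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0

def angle_of_arrival(phase_diff_rad: float,
                     antenna_spacing_m: float,
                     carrier_hz: float = 2.402e9) -> float:
    """Angle of arrival (degrees from broadside) for one antenna pair."""
    wavelength = SPEED_OF_LIGHT / carrier_hz          # ~0.125 m at 2.4 GHz
    s = phase_diff_rad * wavelength / (2 * math.pi * antenna_spacing_m)
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))  # clamp noise

# Example: half-wavelength spacing, quarter-cycle phase lead -> 30 degrees.
print(angle_of_arrival(math.pi / 2, 0.0624))
```

With angles from two or more pairs and the known antenna positions, the direction and (by triangulation) the distance of the second electronic device can be recovered, as the claim states.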
11. An electronic device, comprising: one or more processors, one or more memories, a camera, and at least 2 antennas, wherein the camera and the one or more memories are coupled to the one or more processors;
the camera is configured to capture images;
the at least 2 antennas are configured to receive Bluetooth signals;
the one or more memories are configured to store computer program code, wherein the computer program code comprises computer instructions; and
the one or more processors are configured to invoke the computer instructions to perform the following operations:
in response to a user operation input on a Bluetooth positioning control, determining position information of a second electronic device relative to the electronic device according to the Bluetooth signals received by the at least 2 antennas; and
focusing the camera based on the position information.
12. The electronic device of claim 11, wherein the position information comprises a distance between the second electronic device and the plane where the camera is located, and orientation information of the second electronic device relative to the electronic device; and the processor performs focusing the camera based on the position information by performing:
correcting the distance according to the orientation information of the second electronic device relative to the electronic device; and
focusing the camera using the corrected distance.
13. The electronic device of claim 12, wherein the processor performs correcting the distance according to the orientation information of the second electronic device relative to the electronic device by performing:
determining, according to the distance and the orientation information of the second electronic device relative to the electronic device, the distance between the second electronic device and the electronic device as the corrected distance.
14. The electronic device of claim 11, wherein the position information comprises a distance between the second electronic device and the plane where the camera is located, and orientation information of the second electronic device relative to the electronic device, and the processor further performs: displaying a preview image in real time when the camera is turned on;
wherein the processor performs focusing the camera based on the position information by performing:
determining the position of the second electronic device on the preview image according to the orientation information of the second electronic device relative to the electronic device;
determining a face frame on the preview image according to the position of the second electronic device on the preview image; and
focusing the camera based on the position of the face frame and the distance between a target face and the plane where the camera is located, wherein the position of the face frame is the position of the geometric center of the face frame on the preview image, the target face is the face within the face frame in the preview image, and the distance between the target face and the plane where the camera is located is the distance between the second electronic device and the plane where the camera is located.
15. The electronic device of claim 14, wherein the processor further performs: displaying the face frame at the face position on the preview image.
16. The electronic device of claim 14 or 15, wherein the processor performs determining the face frame on the preview image according to the position of the second electronic device on the preview image by performing:
determining the size of the face frame according to the distance between the target face and the plane where the camera is located and the size of a face model, wherein the size of the face frame is inversely proportional to the distance between the target face and the plane where the camera is located; and
determining the position of the face frame on the preview image according to the position of the second electronic device on the preview image and the size of the face frame.
17. The electronic device of claim 16, wherein after the face frame is determined on the preview image, the processor further performs:
inputting the image within the face frame into a face recognizer, wherein the face recognizer outputs a recognition result indicating a face when it recognizes facial features; and
adjusting the size of the face frame to the size of the recognized face, wherein the position of the face frame is the position of the recognized face.
18. The electronic device of claim 11, wherein the second electronic device is a pair of Bluetooth earphones comprising a left earphone and a right earphone, and the position information of the second electronic device relative to the electronic device comprises the distances between the left earphone and the right earphone, respectively, and the plane where the camera is located, and orientation information of the left earphone and the right earphone, respectively, relative to the electronic device; and
the processor performs focusing the camera based on the position information by performing:
determining the distance between a target face and the plane where the camera is located as the average of the distances between the left earphone and the right earphone, respectively, and the plane where the camera is located, wherein the target face is the face wearing the left earphone and the right earphone;
determining the position of the left earphone on the preview image according to the orientation information of the left earphone relative to the electronic device;
determining the position of the right earphone on the preview image according to the orientation information of the right earphone relative to the electronic device;
determining a face frame bounded by the position of the left earphone on the preview image and the position of the right earphone on the preview image; and
focusing the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located.
19. The electronic device of any one of claims 14-18, wherein the processor performs focusing the camera based on the position of the face frame and the distance between the target face and the plane where the camera is located by performing:
determining the distance between the camera and the target face according to the position of the face frame and the distance between the target face and the plane where the camera is located; and
focusing the camera according to the distance between the camera and the target face.
20. The electronic device of any one of claims 11-19, wherein the processor performs determining the position information of the second electronic device relative to the electronic device according to the received Bluetooth signals of the at least 2 antennas by performing:
determining the angle of arrival of the Bluetooth signal at each antenna according to the wavelength of the Bluetooth signal, the phase difference of the Bluetooth signal received by every two of the at least 2 antennas, and the position of each antenna on the electronic device; and
determining the position information of the second electronic device relative to the electronic device according to the angle of arrival of the Bluetooth signal at each antenna and the position of each antenna on the electronic device.
21. A computer storage medium comprising computer instructions that, when run on an electronic device, cause the electronic device to perform the camera focusing method according to any one of claims 1-10.
CN202110322579.3A 2021-03-25 2021-03-25 Camera focusing method and electronic equipment Pending CN115209027A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110322579.3A CN115209027A (en) 2021-03-25 2021-03-25 Camera focusing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN115209027A (en) 2022-10-18

Family

ID=83571410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110322579.3A Pending CN115209027A (en) 2021-03-25 2021-03-25 Camera focusing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115209027A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811612A (en) * 2015-04-10 2015-07-29 深圳市金立通信设备有限公司 Terminal
CN104954677A (en) * 2015-06-12 2015-09-30 联想(北京)有限公司 Camera focusing determining method and electronic equipment
CN105611167A (en) * 2015-12-30 2016-05-25 联想(北京)有限公司 Focusing plane adjusting method and electronic device
CN106303256A (en) * 2016-08-31 2017-01-04 广东小天才科技有限公司 A kind of determination method and device of acquisition parameters
CN110868536A (en) * 2019-11-05 2020-03-06 珠海格力电器股份有限公司 Access control system control method and access control system
CN111432331A (en) * 2020-03-30 2020-07-17 华为技术有限公司 Wireless connection method, device and terminal equipment
CN111901524A (en) * 2020-07-22 2020-11-06 维沃移动通信有限公司 Focusing method and device and electronic equipment
CN112087649A (en) * 2020-08-05 2020-12-15 华为技术有限公司 Equipment searching method and electronic equipment
CN112511743A (en) * 2020-11-25 2021-03-16 南京维沃软件技术有限公司 Video shooting method and device

Similar Documents

Publication Publication Date Title
CN110445978B (en) Shooting method and equipment
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN110750772A (en) Electronic equipment and sensor control method
CN110458902B (en) 3D illumination estimation method and electronic equipment
CN112686981A (en) Picture rendering method and device, electronic equipment and storage medium
CN110649719A (en) Wireless charging method and electronic equipment
CN113393856B (en) Pickup method and device and electronic equipment
CN113496708A (en) Sound pickup method and device and electronic equipment
CN114610193A (en) Content sharing method, electronic device, and storage medium
CN113572956A (en) Focusing method and related equipment
CN114257920B (en) Audio playing method and system and electronic equipment
CN114863494A (en) Screen brightness adjusting method and device and terminal equipment
WO2022257563A1 (en) Volume adjustment method, and electronic device and system
CN113572957B (en) Shooting focusing method and related equipment
CN114880251A (en) Access method and access device of storage unit and terminal equipment
CN114500901A (en) Double-scene video recording method and device and electronic equipment
CN113518189B (en) Shooting method, shooting system, electronic equipment and storage medium
CN113781548B (en) Multi-equipment pose measurement method, electronic equipment and system
CN113496477A (en) Screen detection method and electronic equipment
CN114302063B (en) Shooting method and equipment
CN113436635A (en) Self-calibration method and device of distributed microphone array and electronic equipment
CN115714890A (en) Power supply circuit and electronic device
CN115706869A (en) Terminal image processing method and device and terminal equipment
CN115393676A (en) Gesture control optimization method and device, terminal and storage medium
CN115209027A (en) Camera focusing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination