WO2022267464A1 - Focusing method and related device - Google Patents
Focusing method and related device
- Publication number
- WO2022267464A1 (PCT/CN2022/072712)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- electronic device
- attention point
- camera
- point
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
Definitions
- the present application relates to the field of terminal technologies, and in particular to a focusing method and related equipment.
- Smart terminal devices such as mobile phones and tablet computers can drive a voice coil motor, based on the position of the focus point selected by the user, to adjust the position of the lens, that is, to change the distance between the lens and the image sensor so that the focal plane falls on the image sensor. Focusing in this way captures a clear image of the focus area.
- the embodiment of the present application discloses a focusing method and related equipment, which can automatically focus according to the user's attention point, reduce user operations, and improve user experience.
- The embodiment of the present application discloses a focusing method, including: in response to a first operation by the user, an electronic device starts shooting, displays a first interface, and displays, on the first interface, a preview picture captured by the camera;
- The electronic device displays a first preview picture on the first interface, where the first preview picture is a preview picture captured by the camera with the attention point, which satisfies a preset condition, as the focus point, and the attention point is the position on the first interface at which the user's line of sight falls.
- The first operation may be an operation in which the user taps the camera application on the screen to start shooting.
- In this way, when the user holds the electronic device to take pictures, the electronic device can automatically focus based on the acquired position of the user's attention point in the preview picture. This focuses accurately on the user's attention point, is easy to operate, and improves user experience.
- The electronic device displaying the first preview picture on the first interface specifically includes: the electronic device acquires a target image through a front camera, where the target image includes an image of the user's eyes; the electronic device determines the user's attention point based on the target image; when the user's attention point satisfies the preset condition, the electronic device adjusts the focal length of the camera with the user's attention point as the focus point, captures the first preview picture through the camera, and displays the first preview picture on the first interface.
- In this way, the electronic device obtains the user's target image in real time through the front camera and determines the position of the attention point, providing a stable input source of focus points, so the electronic device can focus continuously and thereby continuously obtain clear images of the user's area of interest, improving user experience.
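The per-frame flow described above (front camera → gaze estimate → refocus) can be sketched in Python. This is a hypothetical outline: `estimate_gaze_point` and `set_focus_point` stand in for the device's gaze-estimation and camera-control APIs, which the patent does not name.

```python
def gaze_autofocus_step(front_frame, screen_size, estimate_gaze_point, set_focus_point):
    """Process one front-camera frame; return the focus point chosen, or None.

    estimate_gaze_point(frame) -> (x, y) screen coordinates or None (no eyes found).
    set_focus_point(x, y)      -> drives the rear camera to refocus at that point.
    """
    gaze = estimate_gaze_point(front_frame)
    if gaze is None:
        return None                      # no attention point detected: keep old focus
    x, y = gaze
    w, h = screen_size
    if 0 <= x < w and 0 <= y < h:        # gaze falls inside the preview interface
        set_focus_point(x, y)            # refocus on the user's attention point
        return (x, y)
    return None                          # gaze is off-screen: do nothing
```

In a real pipeline this step would run on every preview frame, with the stability checks of the later embodiments layered on top.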
- The method further includes: the electronic device detects a second operation for enabling the attention point focusing function; in response to the second operation, the electronic device displays the first preview picture on the first interface.
- the second operation may be a trigger operation to enable the attention focus function, or a voice command to enable the attention focus function.
- The electronic device determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, the electronic device determines the duration for which the attention point is detected. When the user's attention point satisfies the preset condition, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera specifically includes: when the duration of the detected attention point is not less than a first duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera.
- In this way, only an attention point on which the user's gaze has stabilized is focused on, thereby improving the stability of the preview picture.
- the duration of detecting the attention point may be the duration of the user's fixation on a certain attention point.
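The first-duration-threshold behaviour above is essentially a dwell-time filter. A minimal sketch in Python (the threshold value, the dwell radius, and the class itself are illustrative, not part of the patent):

```python
class DwellFilter:
    """Report a focus point only after the gaze has stayed near the same
    point for at least `t1` seconds (the first duration threshold)."""

    def __init__(self, t1, radius):
        self.t1 = t1            # required dwell time in seconds
        self.radius = radius    # pixels: how far the gaze may wander
        self.anchor = None      # point the dwell timer is attached to
        self.start = None       # time the gaze first hit the anchor

    def update(self, point, now):
        """Feed one gaze sample; return the focus point or None."""
        if self.anchor is None or self._far(point):
            self.anchor, self.start = point, now   # gaze moved: restart timer
            return None
        if now - self.start >= self.t1:
            return self.anchor                     # dwelt long enough: focus here
        return None

    def _far(self, p):
        ax, ay = self.anchor
        return (p[0] - ax) ** 2 + (p[1] - ay) ** 2 > self.radius ** 2
```

Feeding it one sample per preview frame yields a focus point only once the gaze settles.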
- The electronic device determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, the electronic device determines the duration for which the attention point is detected and the intermittent time for which no attention point is detected. When the user's attention point satisfies the preset condition, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera specifically includes: when the duration of the detected attention point is not less than a second duration threshold and the intermittent time for which no attention point is detected is less than a third duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera.
- In this way, when the user's line of sight is briefly unstable, for example when the user glances elsewhere and then looks back, the focus point of the preview picture can remain unchanged, thereby improving the stability of the picture and the user experience.
- Here, the duration of the detected attention point may be the duration of the user's gaze on the first attention point, and the intermittent time for which no attention point is detected may be the duration for which the user's line of sight leaves the first attention point.
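The second/third-threshold behaviour can be sketched as a small state machine that holds the focus through short gaps in gaze detection. This is an illustrative simplification (class name, default thresholds, and semantics are assumptions, not the patent's):

```python
class GazeStabilizer:
    """Hold the current focus point when the gaze disappears only briefly.

    t2: required gaze duration before (re)focusing (second duration threshold).
    t3: maximum tolerated detection gap (third duration threshold).
    """

    def __init__(self, t2=0.4, t3=0.8):
        self.t2, self.t3 = t2, t3
        self.focus = None        # the focus point currently in effect
        self.seen_since = None   # when the current point was first detected
        self.lost_since = None   # when detection was last lost

    def update(self, point, now):
        """Feed one gaze sample ((x, y) or None); return the focus point."""
        if point is None:                      # attention point not detected
            if self.lost_since is None:
                self.lost_since = now
            if now - self.lost_since < self.t3:
                return self.focus              # short glance away: hold focus
            self.focus = self.seen_since = None
            return None                        # gap too long: drop the focus
        if self.seen_since is None:
            self.seen_since = now
        self.lost_since = None
        if now - self.seen_since >= self.t2:   # gaze stable long enough
            self.focus = point
        return self.focus
```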
- The electronic device determining the user's attention point based on the target image specifically includes: the electronic device determines the user's attention point based on the target image, and detects a user action. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically includes: when the detected user action in the target image is a setting action, using the user's current attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera, where the setting action includes one or more of pupil dilation, a "nod", and an "OK" gesture.
- In this way, the electronic device can perceive a user action indicating that the user is paying close attention to the current attention point, so the captured picture better matches the user's intention without requiring manual or other operations by the user, which improves user experience.
- The target image may include the user's eyes, and may also include the user's face or other body parts.
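The setting-action trigger above amounts to a simple membership check once the individual detectors have run. A hypothetical dispatch (the action names and the detector outputs are placeholders for real classifiers, which the patent does not specify):

```python
# Actions the text names as "setting actions" (illustrative identifiers).
SETTING_ACTIONS = {"pupil_dilation", "nod", "ok_gesture"}

def should_focus_now(detected_actions):
    """Return True if any detected user action is a setting action,
    i.e. the current attention point should become the focus point."""
    return bool(SETTING_ACTIONS & set(detected_actions))
```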
- The electronic device determining the user's attention point based on the target image specifically includes: at a first moment, the electronic device determines, based on the target image, a first duration for which the user is detected at a first attention point. When the user's attention point satisfies the preset condition, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera specifically includes: when it is detected that the first duration is greater than a fourth duration threshold, using the first attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera. The method further includes: at a second moment after the first moment, the electronic device determines, based on the target image, a second duration for which the user is detected at a second attention point, where the first attention point and the second attention point are attention points at different positions; when it is detected that the second duration is greater than the fourth duration threshold, using the second attention point as the focus point, adjusting the focal length of the camera again, and capturing a second preview picture through the camera. In this way, when the user's attention shifts from one position to an adjacent position, each refocusing only needs a small motor movement, achieving smooth focusing, improving the imaging quality of the picture, and reducing the resource consumption of the electronic device.
- Here, the first duration may be the duration of the user's gaze on the first attention point, and the second duration may be the duration of the user's gaze on the second attention point.
- The electronic device determining the user's attention point based on the target image specifically includes: the electronic device determines the position of the user's attention point based on the target image. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically includes: when the position of the user's attention point is at the intersection of multiple objects in the target picture, the electronic device matches the objects in the target picture against the user's voice information; when the electronic device matches a first focus object, using the first focus object as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera, where the first focus object is a scene or a person among the multiple objects in the target picture. In this way, the object that the user pays attention to can be determined more accurately and focused on, thereby improving the accuracy of focusing.
- The electronic device determining the user's attention point based on the target image specifically includes: the electronic device determines, based on the target image, the position of the user's attention point and the type of object at that position. When the user's attention point satisfies the preset condition, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera specifically includes: when the attention point is at the intersection of multiple types of objects in the target picture, using the focus object of the same type as the previous focus object as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera. In this way, the object that the user pays attention to can be determined more accurately and focused on, thereby improving the accuracy of focusing.
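The two disambiguation strategies above (voice match first, then continuity with the previous object's type) can be combined into one resolver. This sketch is illustrative; the candidate representation and matching rules are assumptions, not the patent's:

```python
def pick_focus_object(candidates, voice_hint=None, last_type=None):
    """Resolve an ambiguous attention point lying on several objects.

    candidates: list of (name, object_type) pairs under the gaze point.
    voice_hint: a word extracted from the user's speech, or None.
    last_type:  type of the previously focused object, or None.
    """
    if voice_hint:
        for name, kind in candidates:
            if voice_hint in name:        # e.g. the user says "the dog"
                return name, kind
    if last_type:
        for name, kind in candidates:
            if kind == last_type:         # prefer continuity of subject type
                return name, kind
    return candidates[0]                  # fall back to the first candidate
```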
- the first preview image includes a focus frame, and the attention point is a center position of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
- The first preview image includes a focus frame, and the center of the subject at the attention point is the center of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
- the electronic device determining the user's attention point based on the target image includes:
- λ is the modulus of the vector from the corneal center O c to the attention point P g, that is, λ = ‖P g − O c‖;
- The eyeball structural parameters of the first person include: the eyeball center O e; the Kappa angle, which is the angle between the visual axis and the optical axis, with horizontal component α and vertical component β; R, the rotation parameter; t, the translation parameter; the coordinates in the head coordinate system; and V s, the unit normal vector of the plane in which the display screen of the electronic device lies;
- V o is the unit vector of the optical axis, determined by its deflection angles and expressed as:
- Vg is the visual axis unit vector, expressed as:
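The formula images for these expressions did not survive extraction. For reference only, a commonly used 3-D gaze-model formulation consistent with the symbols defined above (a reconstruction, not necessarily the patent's exact expressions) is:

```latex
% Optical axis unit vector from its horizontal/vertical deflection angles
% \theta, \varphi; visual axis obtained by adding the Kappa components
% \alpha (horizontal) and \beta (vertical).
V_o = \begin{pmatrix} \cos\varphi \sin\theta \\ \sin\varphi \\ -\cos\varphi \cos\theta \end{pmatrix},
\qquad
V_g = \begin{pmatrix} \cos(\varphi+\beta)\,\sin(\theta+\alpha) \\ \sin(\varphi+\beta) \\ -\cos(\varphi+\beta)\,\cos(\theta+\alpha) \end{pmatrix}

% The attention point lies a distance \lambda from the corneal center O_c
% along the visual axis; \lambda follows from intersecting the gaze ray
% with the screen plane (P_s a point on the screen, V_s its unit normal):
P_g = O_c + \lambda V_g,
\qquad
\lambda = \lVert P_g - O_c \rVert = \frac{(P_s - O_c)\cdot V_s}{V_g \cdot V_s}
```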
- the embodiment of the present application discloses an electronic device, including: a processor, a camera, and a touch screen.
- the processor is configured to, in response to a first user operation, instruct the camera to start shooting, instruct the touch screen to display a first interface, and display a preview image collected by the camera on the first interface;
- The processor is further configured to instruct the touch screen to display a first preview picture on the first interface, where the first preview picture is a preview picture captured by the camera with the attention point, which satisfies the preset condition, as the focus point, and the attention point is the position on the first interface at which the user's line of sight falls.
- The first operation may be an operation in which the user taps the camera application on the screen to start shooting.
- In this way, when the user holds the electronic device to take pictures, the electronic device can automatically focus based on the acquired position of the user's attention point in the preview picture. This focuses accurately on the user's attention point, is easy to operate, and improves user experience.
- The processor instructing the touch screen to display the first preview picture on the first interface specifically includes: the processor is configured to acquire a target image through a front camera, where the target image includes an image of the user's eyes; the processor is further configured to determine the user's attention point based on the target image; when the user's attention point satisfies the preset condition, adjust the focal length of the camera with the user's attention point as the focus point, capture the first preview picture through the camera, and display the first preview picture on the first interface.
- In this way, the electronic device obtains the user's target image in real time through the front camera and determines the position of the attention point, providing a stable input source of focus points, so the electronic device can focus continuously and thereby continuously obtain clear images of the user's area of interest, improving user experience.
- The processor is further configured to detect a second operation for enabling the attention point focusing function; the processor is further configured to, in response to the second operation, instruct the touch screen to display the first preview picture on the first interface.
- the second operation may be a trigger operation to enable the attention focus function, or a voice command to enable the attention focus function.
- The processor determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, determining the duration for which the attention point is detected. When the user's attention point satisfies the preset condition, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera specifically includes: when the duration of the detected attention point is not less than a first duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera.
- In this way, only an attention point on which the user's gaze has stabilized is focused on, thereby improving the stability of the preview picture.
- the duration of detecting the attention point may be the duration of the user's fixation on a certain attention point.
- The processor determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, determining the duration for which the attention point is detected and the intermittent time for which no attention point is detected. When the user's attention point satisfies the preset condition, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera specifically includes: when the duration of the detected attention point is not less than a second duration threshold and the intermittent time for which no attention point is detected is less than a third duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera. In this way, when the user's line of sight is briefly unstable, for example when the user glances elsewhere and then looks back, the focus point of the preview picture can remain unchanged, thereby improving the stability of the picture and the user experience.
- Here, the duration of the detected attention point may be the duration of the user's gaze on the first attention point, and the intermittent time for which no attention point is detected may be the duration for which the user's line of sight leaves the first attention point.
- The processor determining the user's attention point based on the target image specifically includes: determining the user's attention point based on the target image, and detecting a user action. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically includes: when the detected user action in the target image is a setting action, using the user's current attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera, where the setting action includes one or more of pupil dilation, a "nod", and an "OK" gesture.
- In this way, the electronic device can sense a user action indicating that the user is paying close attention to the current attention point, so the captured picture better matches the user's intention without requiring manual or other operations by the user, which improves user experience.
- The target image may include the user's eyes, and may also include the user's face or other body parts.
- The processor determining the user's attention point based on the target image specifically includes: at a first moment, determining, based on the target image, a first duration for which the user is detected at a first attention point. When the user's attention point satisfies the preset condition, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera specifically includes: when it is detected that the first duration is greater than a fourth duration threshold, using the first attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera.
- The processor is further configured to determine, based on the target image, a second duration for which the user is detected at a second attention point at a second moment, where the first attention point and the second attention point are at different positions and the second moment is after the first moment; when it is detected that the second duration is greater than the fourth duration threshold, the second attention point is used as the focus point, the focal length of the camera is adjusted again, and a second preview picture is captured through the camera.
- In this way, when the user's attention shifts from one position to an adjacent position, each refocusing only needs a small motor movement, achieving smooth focusing, improving the imaging quality of the picture, and reducing the resource consumption of the electronic device.
- Here, the first duration may be the duration of the user's gaze on the first attention point, and the second duration may be the duration of the user's gaze on the second attention point.
- The processor determining the user's attention point based on the target image specifically includes: determining the position of the user's attention point based on the target image. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically includes: when the position of the user's attention point is at the intersection of multiple objects in the target picture, the processor matches the objects in the target picture against the user's voice information; when the processor matches a first focus object, using the first focus object as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera, where the first focus object is a scene or a person among the multiple objects in the target picture. In this way, the object that the user pays attention to can be determined more accurately and focused on, thereby improving the accuracy of focusing.
- The processor determining the user's attention point based on the target image specifically includes: determining, based on the target image, the position of the user's attention point and the type of object at that position. When the user's attention point satisfies the preset condition, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera specifically includes: when the attention point is at the intersection of multiple types of objects in the target picture, using the focus object of the same type as the previous focus object as the focus point, adjusting the focal length of the camera, and capturing the first preview picture through the camera. In this way, the object that the user pays attention to can be determined more accurately and focused on, thereby improving the accuracy of focusing.
- the first preview image includes a focus frame, and the attention point is a center position of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
- The first preview image includes a focus frame, and the center of the subject at the attention point is the center of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
- the electronic device determining the user's attention point based on the target image includes:
- λ is the modulus of the vector from the corneal center O c to the attention point P g, that is, λ = ‖P g − O c‖;
- The eyeball structural parameters of the first person include: the eyeball center O e; the Kappa angle, which is the angle between the visual axis and the optical axis, with horizontal component α and vertical component β; R, the rotation parameter; t, the translation parameter; the coordinates in the head coordinate system; and V s, the unit normal vector of the plane in which the display screen of the electronic device lies;
- V o is the unit vector of the optical axis, expressed as:
- V g is the visual axis unit vector, expressed as:
- the present application provides an electronic device, including a touch screen, a camera, one or more processors, and one or more memories.
- The one or more processors are coupled with the touch screen, the camera, and the one or more memories; the one or more memories are used to store computer program code, and the computer program code includes computer instructions.
- the present application provides an electronic device, including: one or more functional modules.
- One or more functional modules are used to execute the focusing method in any possible implementation manner of any of the above aspects.
- The embodiment of the present application provides a computer storage medium, including computer instructions; when the computer instructions are run on an electronic device, the electronic device is caused to execute the focusing method in any possible implementation of any one of the above aspects.
- an embodiment of the present application provides a computer program product, which, when the computer program product is run on a computer, causes the computer to execute the focusing method in any possible implementation manner of any one of the above aspects.
- FIG. 1A is a schematic cross-sectional view of an eyeball structure model provided by an embodiment of the present application;
- FIG. 1B is a schematic diagram of an application scenario of a focusing method provided by an embodiment of the present application;
- FIG. 9 is a schematic flowchart of a focusing method provided by an embodiment of the present application;
- FIG. 10 is a schematic diagram of a 3D eye model provided by an embodiment of the present application;
- FIG. 11 is a schematic diagram of the conversion from the head coordinate system to the world coordinate system provided by an embodiment of the present application;
- FIG. 12 is a schematic diagram of the relationship between the visual axis and the optical axis provided by an embodiment of the present application;
- FIGS. 13A-13C are schematic diagrams of some user interfaces provided by embodiments of the present application;
- FIG. 14 is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present application.
- Focusing is the process of changing the distance between the lens and the imaging surface (image sensor) through the camera's focusing mechanism so that the subject image is clear.
- Autofocus on a mobile phone uses the principle of light reflected from an object: the reflected light is received by the image sensor (CCD or CMOS) of the camera in the mobile phone to obtain an original image, and the original image is calculated and processed to drive an electric focusing device; this way of focusing is called autofocus.
- Autofocus is a set of data calculation methods integrated in the mobile phone's ISP (Image Signal Processor).
- The viewfinder captures the most original image, and the image data is sent to the ISP as raw data.
- The ISP analyzes the image data, obtains the distance by which the lens needs to be adjusted, and then drives the voice coil motor to make the adjustment, making the image clear. In the eyes of the mobile phone user, this whole process is the autofocus process.
- the lens is locked in the voice coil motor, and the position of the lens can be changed by driving the voice coil motor.
- There are three ways to realize autofocus on a mobile phone: phase focus, contrast focus, and laser focus.
- The three ways of autofocus are introduced below respectively.
- Phase focusing reserves some shielded pixels on the photosensitive element, which are specially used for phase detection; the offset of the lens relative to the focal plane is determined through the distance between pixels and its changes, and the lens position is then adjusted according to the offset to achieve focus.
- the principle of phase focusing is to set phase difference detection pixels on the photosensitive element (such as an image sensor).
- the phase difference detection pixels are covered on the left half or the right half, and can detect the amount of light and other information about objects in the scene.
- the phase difference is the phase difference between the optical signals received by the pixels on the left and right sides.
- the electronic device calculates a correlation value through the images obtained by the left-side and right-side phase difference detection pixels and obtains a focusing function, so that the phase difference and the offset have a one-to-one relationship.
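As an illustration of the correlation step described above, the sketch below finds the pixel shift that best aligns 1-D signals from left-masked and right-masked pixels, and maps it to a lens offset through a made-up calibration constant; this is a minimal model, not the actual ISP algorithm.

```python
import numpy as np

# Illustrative sketch of phase detection: find the pixel shift that best
# aligns the left-masked and right-masked pixel signals by maximizing
# their cross-correlation.
def phase_difference(left, right):
    n = len(left)
    best_shift, best_corr = 0, float("-inf")
    for shift in range(-n // 2, n // 2 + 1):
        corr = float(np.dot(np.roll(left, shift), right))
        if corr > best_corr:
            best_corr, best_shift = corr, shift
    return best_shift

# Made-up calibration: assume lens travel is proportional to the phase
# difference, i.e. the one-to-one focusing function mentioned above.
K_CALIB_UM_PER_PX = 0.8

def lens_offset_um(left, right):
    return K_CALIB_UM_PER_PX * phase_difference(left, right)

# Example: the right-side signal is the left-side signal shifted by 3 px.
sig = np.zeros(64)
sig[20:28] = 1.0
shifted = np.roll(sig, 3)
print(phase_difference(sig, shifted))  # → 3
```

In a real sensor the correlation runs over 2-D patches inside the focusing frame, but the one-to-one mapping from measured phase shift to lens travel is the same idea.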
- Contrast focusing assumes that after focusing succeeds, the contrast between adjacent pixels is the largest. Based on this assumption, a focus point is determined during the focusing process, the contrast between the focus point and adjacent pixels is evaluated, and the voice coil motor is moved repeatedly until a local gradient maximum is obtained and focusing is completed.
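The hill-climbing idea above can be sketched as follows; the contrast metric and the lens/scene model are made up for illustration only.

```python
# Toy sketch of contrast autofocus: step the lens, measure a contrast
# metric, and stop when it starts to fall (a local gradient maximum).
def contrast(patch):
    # Sum of absolute differences between adjacent pixels.
    return sum(abs(a - b) for a, b in zip(patch, patch[1:]))

def contrast_autofocus(capture_at, positions):
    best_pos = positions[0]
    best_c = contrast(capture_at(best_pos))
    for pos in positions[1:]:
        c = contrast(capture_at(pos))
        if c < best_c:        # contrast dropped: we just passed the peak
            break
        best_pos, best_c = pos, c
    return best_pos

def capture_at(pos):
    # Pretend sharpness (and therefore contrast) peaks at lens position 5.
    blur = abs(pos - 5)
    return [0, 10 - blur, 0, 10 - blur, 0]

print(contrast_autofocus(capture_at, list(range(10))))  # → 5
```

Because the search only stops at a local maximum, real implementations sweep in both directions and use finer steps near the peak.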
- In laser focusing, the electronic device emits an infrared laser toward the subject to be photographed (the focusing subject) through an infrared laser sensor. When the laser reaches the focusing subject, it is reflected back along the original path.
- The electronic device calculates the distance from the electronic device to the focusing subject according to the round-trip time of the infrared laser, and then drives the voice coil motor to adjust the position of the lens based on that distance.
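The distance computation follows directly from the speed of light; a one-line sketch:

```python
# Sketch of laser (time-of-flight) ranging: the infrared pulse travels to
# the focusing subject and back, so distance = c * round_trip_time / 2.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def laser_distance_m(round_trip_time_s):
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# A round trip of about 6.67 nanoseconds corresponds to roughly 1 metre.
print(round(laser_distance_m(6.67e-9), 3))  # → 1.0
```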
- The voice coil motor is mainly composed of a coil, a magnet group, and spring plates.
- The coil is fixed in the magnet group by the upper and lower spring plates.
- When the coil is energized, it generates a magnetic field.
- The coil's magnetic field interacts with the magnet group, the coil moves upward, and the lens locked in the coil moves with it.
- When the power is cut off, the coil returns under the elastic force of the spring plates, thus realizing the autofocus function.
- Figure 1A is a schematic cross-sectional view of an eyeball structure model provided in the embodiment of the present application.
- the eyeball includes a cornea 1, an iris 2, a pupil 3, a lens 4, a retina 5, a cornea center 6, and an eyeball center 7, wherein:
- the cornea 1 is the transparent part at the front of the eyeball, and is the first structure through which light passes when entering the eyeball.
- the central area of the outer surface of the cornea 1, about 3mm across, is approximately a spherical arc called the optical zone; the radius of curvature increases gradually toward the periphery, giving the cornea an aspherical shape.
- the cornea 1 is assumed to be a spherical arc surface.
- the iris 2 is a disc-shaped membrane with a hole called the pupil 3 in the center. If the light is too strong, the sphincter muscle in the iris 2 contracts, and the pupil 3 shrinks; when the light becomes weak, the dilator muscle of the iris 2 contracts, and the pupil 3 becomes larger.
- the pupil 3 is the small circular hole in the center of the iris in the animal or human eye, which is the passage for light to enter the eye.
- the retina 5 is the photosensitive part of the eyeball, and external objects are imaged on the retina 5 .
- FIG. 1B shows a schematic diagram of an application scenario involved in a focusing method provided by an embodiment of the present application.
- the user opens the camera application program of the electronic device 100, uses the camera to capture the scene 20, and the electronic device 100 displays the interface 100 as shown in FIG. 1B.
- the interface 100 may include a preview screen 101 , a menu bar 102 , an album 103 , a shooting control 104 , and a switching camera control 105 .
- the menu bar 102 may include options such as aperture, night scene, portrait, photo, video, professional, and more, and the user may select a photo mode according to his or her needs.
- the car 106 can be selected as the focus point.
- After the electronic device receives the user's operation of selecting the focus point, in response to the operation, the focusing system performs focusing based on the focus point selected in the preview screen 101, so that the image in the region of interest is clearer.
- Determining the focus point is an important step in the focusing process of the electronic device. Accurate position information of the focus point can improve the quality of captured images and meet the needs of users. In the prior art, the determination of the focus point and focusing based on the focus point can be performed in the following ways:
- the operation of selecting the focus point may be a touch operation.
- the user is interested in a scene (such as the car 106 in the figure) in the preview screen 101; at this time, the user can select the car 107 as the focus point by touching the car 107 in the preview screen 101.
- the electronic device 100 receives the touch operation, and in response to the operation, obtains the position information of the car 107 in the preview screen. Further, the electronic device 100 focuses based on the location information of the car 107 in the preview picture, and obtains a clear preview picture of the area where the car 107 is located.
- the electronic device can also display an interface 200 as shown in FIG. 2B.
- the interface 200 includes a preview screen 202, and the preview screen 202 can display an area-of-interest frame 201.
- the frame 201 can be determined according to the position of the focus point (the car 106) and can be used to indicate the area where the subject of interest to the user is located. It can be the dotted frame 201 shown in FIG. 2B, or another shape such as a rectangular frame, a circular frame, or a triangle, to indicate the area of interest.
- the electronic device 100 may identify a specific subject according to acquired features of the specific subject in the preview image, use the position of the specific subject in the preview image as the focus point position, and then focus based on the position of the focus point. For example, in face focusing, the electronic device 100 can recognize the face in the preview image, take the position of the face in the preview image as the position of the focus point, and then focus based on the position of the face in the preview image to obtain a clear image of the area where the face is located in the preview image.
- the subjects different people are interested in vary widely, and the types of subjects that the electronic device 100 can identify are limited.
- the subject recognized by the electronic device 100 in the preview screen may also not be the subject the user is interested in. Therefore, this method may fail to determine the focus point and thus cannot meet the user's requirement for the captured image, that is, that the image of the area where the subject of interest is located be clear.
- an embodiment of the present application provides a focusing method, which includes: an electronic device (such as a mobile phone, a tablet computer, etc.) displays a preview screen through a display screen, where the preview screen may be an image collected by the electronic device through the rear camera or the front camera; the electronic device can detect the user's first operation; in response to the first operation, the electronic device obtains the user's target image through the front camera; the electronic device determines, based on the user's target image, the position of the user's attention point on the preview screen displayed by the electronic device, and the area where the attention point is located is the user's area of interest; the electronic device focuses based on the position of the attention point.
- the electronic device can also focus based on the acquired position of the user's attention point in the preview image, which is easy to operate.
- the electronic device obtains the user's target image in real time through the front camera and determines the position of the attention point, providing a stable focus point input source, and the electronic device can continuously focus, thereby continuously obtaining clear images of the user's interest area.
- the second operation is used to trigger the attention point focusing function
- the target image is an image collected by the electronic device through the front camera, and may be an image including the user's face and eyes.
- a focusing method provided by the embodiment of the present application can be applied in the shooting scene shown in FIG.
- a user interface involved in a focusing method provided in an embodiment of the application may be executed by the electronic device 100 in FIG. 1B .
- the electronic device includes a front camera 30 shown in FIG. 3 and a rear camera (not shown in FIG. 3 ).
- the user turns on his/her electronic device, so that the display screen of the electronic device displays the desktop of the electronic device.
- FIG. 3 is a schematic diagram of an interface of a user's electronic device provided in an embodiment of the present application. The interface may include a status bar 31 and a menu bar 32.
- the status bar 31 includes the operator, current time, current geographic location and local weather, network status, signal status, and battery status.
- the menu bar 32 includes icons of at least one application program, and each application program has a corresponding application name below its icon, for example: camera 110, mailbox 115, cloud sharing 116, memo 117, settings 118, gallery 119, phone 120, short message 121, and browser 122. The positions of an application's icon and its name can be adjusted according to the user's preference, which is not limited in this embodiment of the present application.
- FIG. 3 is an exemplary display of the embodiment of the present application, and the schematic diagram of the interface of the electronic device may also be in other styles, which is not limited in the embodiment of the present application.
- the user can input an operation for the camera 110 in the menu bar 32; the electronic device receives the operation and, in response to the operation, displays the interface shown in FIG. 4A.
- the interface is a schematic interface diagram of a camera of an electronic device provided in an embodiment of the present application.
- the interface includes a camera menu bar 41, a preview screen 42, an attention point focus control 401, an album 40A, a shooting control 40B, a switching camera control 40C, a smart vision switch 40D, an artificial intelligence (AI) shooting switch 40E, a flash switch 40F, a filter switch 40G, and a setting control 40H. Wherein:
- the camera menu bar 41 can include options for multiple camera modes such as aperture, night scene, portrait, photo, video, professional, and more. Different camera modes realize different shooting functions.
- the camera mode that the "triangle" in the camera menu bar 41 points to indicates the initial camera mode or the camera mode selected by the user. As shown at 402 in FIG. 4A, the "triangle" points to "photo", indicating that the camera is currently in the photographing mode.
- the preview image 42 is an image collected by the electronic device in real time through the front camera or the rear camera.
- the attention point focusing control 401 can be used to trigger focusing based on the attention point.
- the attention point is the position where the user's eye sight falls on the display screen of the electronic device, that is, the position where the user's eye sight falls on the preview image 42 .
- the photo album 40A is used for the user to view the pictures and videos that have been taken.
- the shooting control 40B is configured to make the electronic device take pictures or videos in response to user operations.
- the switching camera control 40C is used to switch the camera for capturing images between the front camera and the rear camera.
- the smart vision switch 40D is used to turn on or off the smart vision.
- the smart vision can be used for object recognition, shopping, translation, and code scanning.
- An artificial intelligence (AI) shooting switch 40E is used to turn on or off the AI shooting.
- the flash switch 40F is used to turn on or turn off the flash.
- the filter switch 40G is used to turn on or off the filter.
- the setting control 40H is used to set various parameters when collecting images.
- the operation to trigger focusing based on the attention point may be a touch (click) operation input on the attention point focus control 401 as shown in FIG. 4A .
- the user can input an operation for the camera 110 in the menu bar 32; the electronic device receives the operation and, in response to the operation, displays the interface shown in FIG. 4B.
- the interface is a schematic diagram of an interface of a camera of another electronic device provided in an embodiment of the present application.
- the interface includes a camera menu bar 43, a preview screen 44, an album 40A, a shooting control 40B, a switching camera control 40C, a smart vision switch 40D, an artificial intelligence (AI) shooting switch 40E, a flash switch 40F, a filter switch 40G, and a setting control 40H.
- the "triangle" points to "photo", indicating that the camera is currently in a photographing mode.
- the user wishes to select "More" in the camera menu bar 43.
- the operation of selecting "More" in the camera menu bar 43 may be a sliding operation input on the camera menu bar 43. Specifically, the sliding operation may be a slide in the direction indicated by the arrow on the camera menu bar 43 in FIG. 4B.
- the above-mentioned operation of selecting "More" in the camera menu bar 43 may also be a touch operation.
- the electronic device displays an interface as shown in FIG. 4C, which includes a camera menu bar 45 and a more menu bar 46. Wherein:
- the "triangle" in the camera menu bar 45 points to "More", and the more menu bar 46 can include a short video control, a professional video control, a skin beautification control, the attention point focus control 405, a 3D dynamic panorama control, a panorama control, an HDR control, a super night scene control, time-lapse photography, etc.
- the operation to trigger focusing based on the attention point may be a touch (click) operation input on the attention point focus control 405 as shown in FIG. 4C .
- the operation to trigger focusing based on the attention point may be a touch operation input on the camera 110 in the menu bar 32; that is, after the camera is turned on, the electronic device starts the process of focusing based on the attention point.
- the operation of triggering focusing based on the attention point is not limited to the methods provided in the above embodiments, and may also be operations such as voice control.
- Taking voice control as an example, after the user turns on the camera, the electronic device collects voice through the microphone and can recognize whether the voice includes voice control information such as "turn on attention focusing" and "focus", thereby triggering focusing based on the attention point.
- FIG. 4A-FIG. 4C are exemplary representations of the embodiment of the present application, and the schematic interface diagrams of the electronic device may also be in other styles, which are not limited in the embodiment of the present application.
- When an operation that triggers focusing based on the attention point is detected, the electronic device responds to the operation by acquiring the target image of the user through the front camera to determine the position information of the point where the user's gaze falls on the preview screen displayed by the electronic device, that is, the position information of the user's attention point.
- the location information of the attention point may include the coordinates of the attention point on the plane coordinate system (ie, the display screen coordinate system) where the preview image is located.
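As an illustration of how a gaze estimate could yield such display-plane coordinates, the sketch below intersects a gaze ray with the screen plane. The conventions (screen plane at z = 0, units in metres, eye position and visual-axis direction already expressed in the display coordinate system) are assumptions for illustration, not the S901-S907 procedure.

```python
# Hypothetical sketch: intersect a gaze ray with the display plane to get
# the attention point in screen coordinates.
def gaze_to_screen(eye_pos, gaze_dir):
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None            # gaze is parallel to the screen
    t = -ez / dz               # ray parameter where the ray meets z = 0
    if t <= 0:
        return None            # intersection lies behind the eye
    return (ex + t * dx, ey + t * dy)

# Eye 30 cm in front of the screen, looking at it and slightly downward.
print(gaze_to_screen([0.0, 0.0, 0.3], [0.0, -0.1, -1.0]))
```

A `None` result corresponds to the case, discussed later, where the attention point is not on the preview screen and no focusing is performed.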
- the process for the electronic device to determine the location information of the user's attention point may refer to the relevant description of S901-S907 in FIG. 9 below.
- when the electronic device detects that the user has input an operation on the attention point focus control 401 in the interface shown in FIG. 4A, in response to the operation, the interface shown in FIG. 4D may be displayed.
- the attention point focus control 401 changes from a first color (for example, gray) to a second color (for example, black), indicating that the attention-point-based focusing function has been turned on.
- the display form of the attention-point focusing control indicating to enable or disable the attention-point-based focusing function is not limited to a color change, and may also be a display form of different transparency.
- the electronic device displays an interface that includes prompt information, such as "please look at the dot below", for prompting the user to look at the calibration point 408 in the interface.
- the calibration point 408 is used by the electronic device to determine the user's face model and eyeball structure parameters.
- for the specific process by which the electronic device determines the user's face model and eyeball structure parameters, refer to the related description of S903 in FIG. 9 below.
- when the electronic device detects that the user has input an operation on the attention point focus control 405 in the interface shown in FIG. 4C, in response to the operation, the interface shown in FIG. 4E may also be displayed.
- the interface includes an icon 406 for indicating that the focusing function based on attention points has been enabled, and a control 407 for disabling the focusing function based on attention points.
- the user may input an operation on the control 407 to close the function.
- in response to the above-mentioned operation input on the attention point focus control, the electronic device displays an interface that includes prompt information, such as "please look at the dot below", for prompting the user to look at the calibration point 409 in the interface.
- the calibration point 409 is used by the electronic device to determine the user's face model and eyeball structure parameters.
- for the specific process by which the electronic device determines the user's face model and eyeball structure parameters, refer to the related description of S903 in FIG. 9 below.
- the preview image displayed by the electronic device may be an image collected by a rear camera, or an image collected by a front camera.
- For example, the user turns on the camera and uses the front camera of the electronic device to take a selfie, and the electronic device displays the preview screen captured by the front camera. In response to an operation triggering focusing based on the attention point, the electronic device determines, through the obtained target image of the user, the position information of the attention point on the display screen, and then focuses according to the position information of the attention point.
- the target image of the user obtained by the electronic device may be obtained from the preview screen, or directly from the image collected by the front camera (that is, the image collected by the front camera is split into two channels: one channel is displayed as the preview screen through the display screen, and the other channel is used to determine the position information of the user's attention point).
- the electronic device performs focusing based on the determined location information of the user's attention point.
- the focusing process for the attention point can be triggered through the following implementation methods. It should be noted that several implementations of triggering the focusing process for the point of attention will be described by taking the preview image as an image collected by the rear camera of the electronic device as an example.
- Implementation mode 1: when the user's gaze duration on an attention point satisfies a preset condition, the electronic device triggers a focusing process on the attention point.
- the gaze duration is the period of time from when the user's gaze falls on the attention point on the display screen to when it leaves the attention point.
- the electronic device performs focusing on the camera that captures the preview image.
- the electronic device places the focusing frame based on the position information of the attention point, and then executes the focusing process according to the phase difference data in the focusing frame, where the phase difference data is the phase difference in the focusing frame.
- the focusing process includes the electronic device determining the distance and direction by which the lens needs to be adjusted according to the phase difference data in the focusing frame, and driving the voice coil motor to adjust the position of the lens in the camera, that is, changing the distance (image distance) between the lens and the image sensor, so that the image of the area where the user's attention point is located is clear.
- the electronic device determines the position information of the attention point where the user's line of sight falls on the preview screen at each moment from the user's target image acquired in real time through the front camera, and records the user's gaze duration for each attention point.
- the preset condition is that the fixation duration for the attention point is not less than the first duration threshold.
- the electronic device performs a focusing process.
- For the focusing process, refer to S906- in FIG. 9 below.
- the electronic device performs focusing based on the position information of the first attention point.
- the electronic device starts counting when it detects that the user's line of sight falls on the first attention point, until the electronic device determines that the user's attention point changes, that is, the determined position information of the attention point is not the position information of the first attention point.
- the electronic device restarts timing.
- the electronic device may display a timer, and the timer may be used to indicate the length of time the user's gaze falls on the attention point.
- the preset condition for the electronic device to perform focusing is to detect that the duration of the user's gaze on the first attention point is not less than 3s.
- the electronic device determines the position information of the user's first attention point; as shown in FIG. 5A, the user's first attention point is the position 501 on the preview screen. At this time, the electronic device displays a timer 502 for displaying the duration of the user's line of sight falling on the position 501; as shown by the timer 502 in FIG. 5A, the timer starts from 0.0s.
- the electronic device detects that the value in the timer changes to 3.0s, as shown at 503 in FIG. 5B; that is, the electronic device determines that the user's line of sight has fallen on the first attention point for 3.0s, and executes the focusing process based on the position information of the first attention point.
- the timer may also be a countdown timer.
- after the electronic device determines the position information of the user's first attention point, it displays a countdown timer 503 as shown in FIG. 5B, and the countdown timer displays 3.0s.
- when the electronic device detects that the value of the countdown timer changes to 0.0s, as shown at 502 in FIG. 5A, it executes a focusing process based on the position information of the first attention point.
- when the electronic device detects that the user's attention point changes from position 501 in FIG. 5A to position 504, the timer starts timing again and records the duration of the user's gaze on position 504.
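The dwell-time trigger of implementation mode 1 can be sketched as a small state machine; the 3.0s threshold and the `update()` interface are illustrative assumptions, not the device's actual API.

```python
# Sketch of the dwell-time trigger: fire a focus request once the gaze has
# stayed on the same attention point for the duration threshold.
FIRST_DURATION_THRESHOLD_S = 3.0

class DwellFocusTrigger:
    def __init__(self, threshold_s=FIRST_DURATION_THRESHOLD_S):
        self.threshold = threshold_s
        self.point = None     # attention point currently being timed
        self.start = None     # moment the gaze landed on that point

    def update(self, point, now_s):
        """Feed the attention point estimated at time now_s; returns the
        point when focusing should be triggered, else None."""
        if point != self.point:        # gaze moved: restart the timer
            self.point, self.start = point, now_s
            return None
        if now_s - self.start >= self.threshold:
            self.start = now_s         # re-arm after triggering
            return point
        return None

trigger = DwellFocusTrigger()
events = [((10, 20), 0.0), ((10, 20), 1.5), ((10, 20), 3.0), ((40, 50), 3.5)]
fired = [trigger.update(p, t) for p, t in events]
print(fired)  # → [None, None, (10, 20), None]
```

The counting/countdown timers 502 and 503 in FIGS. 5A-5B are just UI views of the same elapsed-time state.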
- the preset condition may be that the user's gaze duration on the first attention point is not less than the second duration threshold and the duration for which the user's gaze leaves the first attention point is less than the third duration threshold; in that case, the electronic device performs focusing based on the position information of the first attention point. Specifically, at a certain moment after the user's gaze duration on the first attention point is not less than the second duration threshold, the electronic device detects that the user's gaze leaves the first attention point, and the user's gaze leaves the first attention point for a first duration and then falls on the first attention point again.
- the electronic device determines whether the first duration is less than the third duration threshold. When the first duration is less than the third duration threshold, the electronic device does not change the focusing information based on the position information of the first attention point, that is, does not perform the process of adjusting the voice coil motor for focusing. When the first duration is not less than the third duration threshold, and the first duration is not less than the second duration threshold, the electronic device performs a focusing process based on the position information of the second attention point, where the second attention point is the position where the user's gaze falls on the preview screen during the first duration.
- the user's attention point determined by the electronic device according to the acquired target image of the user may be on the preview screen (such as the above-mentioned second attention point) or not.
- when the electronic device detects that the user's attention point is not on the preview screen, the electronic device does not perform a focusing process based on the attention point.
- For example, the second duration threshold is set to 3s and the third duration threshold is set to 1s.
- the electronic device detects that the user's attention point is on position 501 in FIG. 5D, and at a certain moment after the gaze duration at position 501 is not less than 3s (as shown by timer 502 in FIG. 5D, the duration is 4.0s), the electronic device detects that the user's attention point changes to position 506 in FIG. 5D; after gazing at position 506 for 0.5s, the user's attention point returns to position 501.
- the electronic device detects that the user's attention point left position 501 for 0.5s, which is less than the third duration threshold of 1s, and continues to perform focusing based on the position information of position 501. It should be understood that after the electronic device detects that the user's attention point has been fixed at position 501 for not less than 3s, it places a focusing frame based on the position information of position 501 and focuses. Since the time the attention point left position 501 is less than the third duration threshold, focusing is still performed based on the position information of position 501 without changing the focusing information, so the electronic device does not need to perform the process of adjusting the voice coil motor, thus reducing the resource consumption of the electronic device.
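A simplified reading of this keep-or-refocus decision can be sketched as follows, with the example's 3s/1s thresholds as assumed constants:

```python
# Sketch of the leave-and-return hysteresis above: a brief glance away
# (shorter than the third duration threshold) keeps the current focus;
# a sustained gaze elsewhere moves it. Threshold values follow the example.
SECOND_THRESHOLD_S = 3.0   # gaze time required to adopt a new focus point
THIRD_THRESHOLD_S = 1.0    # look-aways shorter than this are ignored

def should_refocus(locked_point, away_duration_s, new_point,
                   new_gaze_duration_s):
    if away_duration_s < THIRD_THRESHOLD_S:
        return locked_point          # brief glance: keep current focus
    if new_gaze_duration_s >= SECOND_THRESHOLD_S:
        return new_point             # sustained gaze elsewhere: refocus
    return locked_point

# FIG. 5D example: a 0.5s glance at position 506 keeps focus at 501.
print(should_refocus("pos501", 0.5, "pos506", 0.5))  # → pos501
```

Keeping the locked point means the voice coil motor is not re-driven, which is exactly the resource saving described above.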
- the preset condition is that the user's line of sight leaves the first attention point and falls on the second attention point, and the user's physiological characteristics change when falling on the second attention point.
- the electronic device focuses based on the location information of the first attention point.
- the electronic device detects that the user's line of sight leaves the first attention point and falls on the second attention point.
- through the collected target image of the user, the electronic device determines that the physiological characteristics of the user's eyes have changed, for example, the pupil is dilated; the electronic device then performs a focusing process based on the position information of the second attention point.
- when the electronic device does not detect a change in the physiological characteristics of the user's eyes, the focusing information based on the position information of the first attention point is not changed, that is, the process of adjusting the voice coil motor for focusing is not performed. It should be understood that, in some implementations, when the electronic device detects a change in the user's physiological characteristics, it may place a focusing frame based on the position information of the user's attention point at that moment and perform focusing without judging whether the gaze duration satisfies the condition.
- the third duration threshold is set to 3s.
- the electronic device detects that the user's attention point is on position 501 in FIG. 5E, and at a certain moment after the fixation duration at position 501 is not less than 3s (the duration shown by timer 502 in FIG. 5E is 4.0s), the electronic device detects that the user's attention point changes to position 504 in FIG. 5E. The electronic device recognizes that, compared with the pupil size when the attention point was at position 501, the pupil size in the current target image of the user has increased, so it can be determined that the pupil is dilated.
- the electronic device places a focusing frame based on the position information of position 504 and performs focusing.
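The pupil-dilation check could be sketched as a simple diameter comparison between two target images; the 15% growth ratio and the function names are made-up for illustration.

```python
# Sketch of the pupil-dilation trigger: compare the pupil diameter measured
# in the current target image with the previous one.
DILATION_RATIO = 1.15   # assumed: >15 % diameter growth counts as dilation

def pupil_dilated(prev_diameter_px, curr_diameter_px, ratio=DILATION_RATIO):
    return curr_diameter_px >= prev_diameter_px * ratio

def next_focus_point(locked_point, new_point, prev_d, curr_d):
    # Dilation counts as the physiological change that permits refocusing
    # without waiting for the dwell time.
    return new_point if pupil_dilated(prev_d, curr_d) else locked_point

print(next_focus_point("pos501", "pos504", prev_d=34.0, curr_d=41.0))  # → pos504
```

A real implementation would normalize for ambient-light-driven pupil changes (see the iris description around FIG. 1A) before treating dilation as an interest signal.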
- the first duration threshold, the second duration threshold, and the third duration threshold may or may not be the same value; this embodiment of the present application does not limit the values of the duration thresholds.
- the electronic device may perform smooth focusing according to changes of the user's attention point. Specifically, when the electronic device detects that the user's gaze duration on an attention point is not less than the fourth duration threshold, it places the focusing frame based on the position information of the attention point, and then drives the voice coil motor to adjust the distance between the lens and the image sensor to perform focusing. For example, when there are multiple scenes or characters in the preview screen, the user may be interested in each scene or character, and the user's attention will shift from one scene or character to other adjacent scenes or characters.
- when the device detects that the user's attention point stays fixed on the first scene or person for a duration not less than the fourth duration threshold, it focuses based on the position information of that attention point (the position information of the first scene or person); when it detects that the user's attention point has shifted to a second scene or person adjacent to the first, and the gaze duration on the second scene or person is not less than the fourth duration threshold, it focuses based on the position information of the second scene or person. When the user's attention shifts from one position to an adjacent position, the electronic device drives the voice coil motor to adjust the position of the lens, and only a small lens movement is required.
- the fourth duration threshold is 2s.
- the user's attention is first on the character 601 in FIG. 6A , then shifts to the character 603 adjacent to the character 601 , and finally shifts from the character 603 to the adjacent character 605 .
- the electronic device detects the change process of the user's attention point as shown in Figures 6B-6D.
- the electronic device detects that the user's attention point is fixed on the character 601 for 3.0s, as shown by the timer 602 in Figure 6B.
- the electronic device determines that the distance to adjust the lens is the first distance based on the position information of the person 601, and further drives the voice coil motor to adjust the first distance to move the lens to the first position.
- the electronic device determines the distance to adjust the lens as the second distance based on the position information of the person 603, and further drives the voice coil motor to adjust the second distance to move the lens to the second position.
- when the electronic device detects that the user's focus shifts from the character 603 to the adjacent character 605, a timer 606 as shown in FIG. 6D is displayed; the timer 606 shows that the gaze duration is 3.0s, which is not less than the fourth duration threshold of 2s, so the electronic device determines, based on the position information of the person 605, that the distance to adjust the lens is a third distance, and further drives the voice coil motor over the third distance to move the lens to a third position.
- the person 603 is adjacent to the person 601
- the second distance between the second position of the lens and the first position is relatively small
- the person 605 is adjacent to the person 603
- the third distance between the third position of the lens and the second position is relatively small
- when the user uses the continuous shooting mode or records a video, the electronic device detects the location of the user's attention point, and when it detects that the attention point shifts to adjacent scenes or characters in successive frames of images, the electronic device performs a smooth focusing process to obtain multi-frame continuous-shot photos or videos that meet the user's needs. Each photo obtained by continuous shooting, and each frame of the recorded video, is obtained by focusing based on its corresponding attention point.
- the user can long press the shooting control 40B as shown in FIG. 6A to start the continuous shooting mode.
- a step-by-step focusing method may be adopted. Specifically, the electronic device can detect the distance between the attention points of two or more consecutive frames of the continuous shooting or video.
- when that distance is large, the electronic device can push the voice coil motor in increments of a first step (for example, 30 μm) until the lens reaches the corresponding position, realizing smooth focusing across multiple frames and improving the focusing effect.
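The step-wise motor adjustment described above can be sketched as follows. This is a minimal illustration only: the helper name `plan_vcm_steps`, the micrometre units, and the one-step-per-frame convention are assumptions, not the device's actual driver interface.

```python
def plan_vcm_steps(current_um, target_um, step_um=30.0):
    """Plan a sequence of voice-coil-motor positions from the current lens
    position to the target position, moving at most `step_um` micrometres
    at a time so that focus changes gradually across successive frames."""
    positions = []
    pos = current_um
    while abs(target_um - pos) > step_um:
        pos += step_um if target_um > pos else -step_um
        positions.append(pos)
    positions.append(target_um)  # final (small) move lands exactly on target
    return positions
```

For a 100 μm focus change with a 30 μm step, the lens would be driven in four small moves rather than one jump, which is what makes the multi-frame focusing appear smooth.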
- after the electronic device detects the location of the user's attention point, it determines an ROI frame based on the location information of the attention point.
- the size of the ROI frame can be preset, and after the location information of the user's attention point is determined, the ROI frame is displayed with the attention point as the geometric center of the ROI frame.
- the electronic device can intelligently detect the image around the attention point; for example, through face recognition, it recognizes a face and determines the face frame, also known as the region of interest frame, according to the size of the face. Exemplarily, as shown in FIG. 6E, FIG. 6F and FIG. 6G, the electronic device determines three ROI frames 607, 608, and 609 based on the location information of the user's attention points at three moments and on face recognition, respectively.
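The preset-size ROI placement described above (the ROI frame displayed with the attention point as its geometric center) can be sketched as follows; the function name and the clamping of the frame to the display bounds are illustrative assumptions.

```python
def roi_from_attention_point(px, py, roi_w, roi_h, screen_w, screen_h):
    """Return (left, top, width, height) of a preset-size ROI frame whose
    geometric center is the attention point (px, py), clamped so the
    frame stays entirely on the display."""
    left = min(max(px - roi_w // 2, 0), screen_w - roi_w)
    top = min(max(py - roi_h // 2, 0), screen_h - roi_h)
    return left, top, roi_w, roi_h
```

When the attention point is near an edge of the screen, the frame slides inward instead of extending past the display, so its center may then deviate slightly from the attention point.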
- the electronic device may perform focusing in combination with the user's physiological feature information, where the physiological feature information may be the user's expression or action, and is used to trigger the focusing process of the electronic device.
- the electronic device can obtain the user's image through the front camera, and the electronic device can determine whether the facial expression, head movement, and body movement in the user's image match a preset expression or action command.
- the electronic device determines the position of the focus frame based on the position information of the attention point, and then performs the focusing process.
- when the electronic device detects a confirmation action of the user, such as a "nodding" or "OK" gesture, or detects that the user's pupils dilate, the electronic device focuses.
- the electronic device can store preset facial expressions or action instructions.
- Implementation mode 3: the electronic device performs focusing in combination with the voice information of the user.
- the electronic device may perform focusing in combination with voice information of the user. Specifically, the electronic device responds to the above operation of triggering focus based on the attention point, acquires the sound in the environment in real time through the microphone, and recognizes the voice information in the sound in the environment.
- when the electronic device determines the position information of the user's attention point and detects that the user's voice information includes information about a scene or person in the preview screen (that is, the first focus object), it identifies that scene or person (also referred to as the subject) in the area within the preset range where the attention point is located; further, the electronic device places a focus frame based on the position information of the subject for focusing.
- the area within the preset range where the attention point is located may be an area within a range of preset pixel height and preset pixel width centered on the attention point.
- FIG. 7A is an interface of an electronic device provided by the embodiment of the present application.
- the microphone icon 701 is used to turn on or off the microphone, which can indicate the status of the microphone being turned on or off.
- 701 in FIG. 7A indicates that the microphone is in an on state
- 702 in FIG. 7B indicates that the microphone is in an off state.
- the electronic device determines the position of the user's attention point as position 703. During the period when the microphone is turned on, the electronic device detects that the sound collected by the microphone includes the voice information of "car".
- the electronic device recognizes the car 704 in the image of the area within the preset range where the user's attention point is located, and determines the focus point as the car 704 in FIG. 7A; further, it places a focus frame based on the position information of the car 704 for focusing.
- the electronic device may display the ROI frame according to the size of the car 704, as shown in 705 in FIG. 7C.
- after determining the location information of the user's attention point, the electronic device identifies the subject (scene or person) at the location of the user's attention point (position A), and then focuses based on the location information of the subject at position A.
- when the electronic device detects that the user's attention has shifted to another position (position B), it recognizes the scenes or people within the preset range of position B; when it recognizes one of the same type as the subject at position A, it records that one as the subject at position B and determines its position information. Further, the electronic device determines a focus frame based on the position information of the subject at position B, and then performs the focusing process.
- the same category refers to scenes or people with the same characteristics, such as human faces, buildings, roads, vehicles, plants, animals, etc. It should be understood that the location information of position A and the location information of the subject at position A may be the same, or there may be a certain deviation between them.
- the electronic device determines that the user's attention point is at position 801 in FIG. 8A, recognizes that the subject at position 801 is a human face 802, and focuses based on the location information of the face 802.
- when the electronic device detects that the user's attention has shifted to the position 803 in FIG. 8B, it identifies whether there are subjects of the same type as the face 802 around the position 803.
- the electronic device recognizes the human face 804 within the preset range of the position 803 , and further, focuses based on the position information of the human face 804 .
- when the electronic device detects the situation described in any one of the foregoing implementation manners 1 to 4, it executes the attention-point-based focusing process.
- a focusing method provided by an embodiment of the present application will be described in detail below with reference to the accompanying drawings.
- the method provided in the embodiment of the present application may be applied in a scene where the electronic device focuses as shown in FIG. 1B , and the method may be executed by the electronic device 100 in FIG. 1B .
- the method may include but not limited to the steps shown in Figure 9:
- the electronic device receives a user's first operation for triggering an attention-based focusing function.
- the first operation may be the operation described above to trigger focusing based on the point of attention.
- for the first operation, reference may be made to the relevant description above; details are not repeated here.
- the electronic device acquires the target image through the front camera in response to the above first operation.
- the target image is an image collected by the electronic device through a front-facing camera, and may be an image including a user's face and eyes.
- after the electronic device displays the preview image, the electronic device receives the first operation and then performs S902.
- the first operation is a touch operation input to the camera 110 in FIG. 3 above
- the electronic device receives the first operation and, in response to the operation, displays a preview image captured by the default camera and obtains the target image through the front camera.
- the electronic device determines position information of the user's attention point on the display screen of the electronic device based on the target image.
- the electronic device processes the acquired target image through an image processing method to obtain image parameters and eye structure parameters of the user. Further, the electronic device uses the obtained image parameters and eye structure parameters to determine the location information of the user's attention point on the display screen of the electronic device.
- FIG. 10 is a schematic diagram of a 3D eye model provided by an embodiment of the present application.
- the geometric center of the display screen of the electronic device is the origin of the world coordinate system
- P i is the center of the iris
- O c is the center of the cornea
- O e is the center of the eyeball
- V o is the unit vector of the optical axis
- V g is the visual axis unit vector
- the visual axis of the eye is defined as the line from the corneal center Oc to the attention point Pg on the plane where the display screen of the electronic device is located
- the optical axis is the line between the eyeball center and the corneal center.
- the attention point Pg can be expressed as:
- the eyeball center O e , the optical axis unit vector V o and the visual axis unit vector V g can be determined based on the user's target image, and then c and ⁇ can be obtained and substituted into formula (1) to obtain the position information of the attention point P g .
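Formula (1) referenced above is not reproduced in this excerpt. From the surrounding definitions (eyeball center O_e, optical axis unit vector V_o, visual axis unit vector V_g, and the scalars c and λ mentioned in the text), one plausible reconstruction, taking c as the distance from the eyeball center to the corneal center along the optical axis and λ as the distance from the corneal center to the attention point along the visual axis, is:

```latex
P_g = O_e + c\,V_o + \lambda\,V_g \qquad (1)
```

with λ fixed by requiring P_g to lie in the plane of the display screen. This is a hedged reading of the model, not the patent's verbatim formula.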
- the following describes in detail how the electronic device determines the position information of the user's attention point on the display screen of the electronic device through the target image through steps S903-S905.
- the electronic device determines image parameters and eyeball structure parameters of the user based on the target image.
- the image parameters include the conversion relationship (R, t) required to convert the user's eyeball center from the head coordinate system to the world coordinate system where the electronic device is located, and the position information of the iris center P i .
- the above conversion relationship (R, t) is used to determine the position information of the user's eyeball center in the world coordinate system; different head postures of the user correspond to different conversion relationships.
- the conversion relationship may include the rotation parameter R and the translation parameter t.
- the rotation parameter R can be a rotation matrix
- the translation parameter t can be a translation matrix.
- the eyeball structure parameters include the coordinates of the eyeball center O e in the head coordinate system and the Kappa angle in FIG. 10; the Kappa angle is the angle between the visual axis and the optical axis, including the horizontal component α and the vertical component β.
- the electronic device can determine the eyeball structure parameters through a calibration method for the eyeball center and a calibration method for the Kappa angle.
- the process for the electronic device to calibrate the eyeball structure parameters may include: the electronic device establishes the user's head coordinate system, and, when receiving an operation that triggers attention-based focusing, responds to the operation by displaying the interface shown in FIG. 4G, instructing the user to gaze at a calibration point (such as 408 or 409 in FIG. 4G) displayed on the display screen of the electronic device for a preset duration (for example, 1s). At this time, the electronic device can acquire the image of the user and calculate the user's eyeball structure parameters.
- the electronic device may also determine the eyeball structure parameters through multiple calibration points. For example, the electronic device sequentially displays multiple calibration points at different positions and instructs the user to gaze at each calibration point for a preset duration, thereby determining the user's eyeball structure parameters.
- the electronic device determines the image parameters (R, t, Pi) based on the target image may include the following process:
- the electronic device uses face recognition technology and eye recognition technology in image processing to respectively recognize the face and eyes in the target image.
- the electronic device determines the conversion relationship (R, t) between the head coordinate system and the world coordinate system based on the face in the target image.
- the electronic device includes a sensor that can be used to obtain the position information of the feature points of the face, for example, a Kinect sensor.
- the electronic device detects the position information of the feature points of the face at time t' through the sensor, and then, with reference to the position information of the feature points in the face model and the transformation of those feature points from the head coordinate system to the world coordinate system, determines the rotation matrix R (including yaw angle, pitch angle and roll angle) and the translation matrix t, that is, the conversion relationship (R, t) between the head coordinate system and the world coordinate system.
- the human face model is a reference model determined by keeping the human face facing the display screen of the electronic device for a period of time, and using images collected by the front camera.
- the face model can be made while the user faces a calibration point on the display screen of the electronic device (such as 408 or 409 in FIG. 4G), and this process can be executed at the same time as the above-mentioned determination of the user's eyeball structure parameters.
- the electronic device determines the coordinates of the iris center P i based on the image of the eye in the target image.
- the electronic device may determine the coordinates of the center of the iris using an image gradient method. Specifically, the electronic device may use the following formula to determine the coordinates of the iris center P i : h' = argmax over h of (1/N) Σ i (d i · g i ),
- h' is the iris center, that is, P i , h is the potential iris center, d i is the displacement vector, g i is the gradient vector, and N is the number of pixels in the image.
- the electronic device may scale the displacement vector di and the gradient vector gi into unit vectors to obtain equal weights for all pixels.
- the electronic device determines the gradient vector of each pixel x i in the eye image. Every pixel in the eye image is a potential iris center h; for each potential center, the electronic device determines the displacement vector from h to each pixel x i , takes the dot product of each displacement vector with the gradient vector of the corresponding pixel x i , and computes the mean value of these dot products. The potential center x max with the largest mean dot product is taken as the iris center, and the coordinate of x max is the coordinate of the iris center P i .
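The gradient-vector procedure described in the text can be sketched as a brute-force search: every pixel is treated as a potential iris center, displacement and gradient vectors are scaled to unit length as stated above, and the candidate with the largest mean dot product wins. The function name and the synthetic-image conventions are illustrative assumptions.

```python
import numpy as np

def iris_center(eye_gray):
    """Estimate the iris center P_i of a grayscale eye image by the image
    gradient method: for each candidate center h, score the mean dot
    product of unit displacement vectors d_i (from h to each pixel x_i)
    with unit gradient vectors g_i, and return the best-scoring pixel."""
    gy, gx = np.gradient(eye_gray.astype(float))  # gradients along y, x
    mag = np.hypot(gx, gy)
    mask = mag > 0
    gx = np.where(mask, gx / np.where(mask, mag, 1), 0)  # unit gradients
    gy = np.where(mask, gy / np.where(mask, mag, 1), 0)
    h, w = eye_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best, best_pos = -np.inf, (0, 0)
    for cy in range(h):
        for cx in range(w):
            dy, dx = ys - cy, xs - cx
            norm = np.hypot(dx, dy)
            norm[cy, cx] = 1.0  # avoid division by zero at the candidate
            dots = (dx / norm) * gx + (dy / norm) * gy
            score = dots.mean()  # mean dot product for this candidate
            if score > best:
                best, best_pos = score, (cx, cy)
    return best_pos  # (x, y) coordinate taken as the iris center P_i
```

For a dark circular iris on a lighter sclera, the gradients point radially outward from the true center, so the mean dot product peaks when the candidate coincides with that center.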
- the electronic device determines the position information of the eyeball center O e , the optical axis unit vector V o and the visual axis unit vector V g based on the image parameters (R, t, P i ) and the user's eyeball structure parameters.
- the head coordinate system is determined by the physiological structure of the human head
- the world coordinate system is determined by the electronic device
- the center of the eyeball has the following transformation between the two coordinate systems: O e (world) = R · O e (head) + t
- the optical axis unit vector V o can be determined according to the following formula:
- r e is the radius of the eyeball, usually between 11-13mm
- the trigonometric function expression of the optical axis unit vector V o
- the deflection angle of the optical axis unit vector V o is
- the optical axis unit vector V o is rotated by the Kappa angle to obtain the visual axis unit vector V g :
- the electronic device determines the position information of the user's attention point P g on the display screen of the electronic device based on the determined position information of the eyeball center O e , the optical axis unit vector V o and the visual axis unit vector V g .
- the electronic device substitutes the determined O e , V o and V g into the above expression to obtain λ, and further substitutes the determined O e , V o , V g and λ into the above formula (1),
- the three-dimensional coordinates (world coordinate system) of the attention point P g of the user's line of sight on the display screen of the electronic device are obtained.
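The final two steps (rotating the optical axis by the Kappa angle to approximate the visual axis, then intersecting the visual axis with the screen plane to obtain P_g) can be sketched as below. This assumes a world coordinate system with the display in the plane z = 0 and its origin at the screen's geometric center, as stated earlier; the rotation convention for α and β and all function names are illustrative assumptions.

```python
import numpy as np

def rotate_by_kappa(v_o, alpha, beta):
    """Rotate the optical-axis unit vector v_o by the horizontal (alpha)
    and vertical (beta) components of the Kappa angle, in radians, to
    approximate the visual-axis unit vector V_g."""
    ry = np.array([[np.cos(alpha), 0, np.sin(alpha)],   # yaw about y-axis
                   [0, 1, 0],
                   [-np.sin(alpha), 0, np.cos(alpha)]])
    rx = np.array([[1, 0, 0],                           # pitch about x-axis
                   [0, np.cos(beta), -np.sin(beta)],
                   [0, np.sin(beta), np.cos(beta)]])
    v = ry @ (rx @ v_o)
    return v / np.linalg.norm(v)

def gaze_point_on_screen(o_c, v_g):
    """Intersect the visual axis (from the corneal center o_c, in world
    coordinates, along v_g) with the display plane z = 0, returning the
    (x, y) coordinates of the attention point P_g on the screen."""
    lam = -o_c[2] / v_g[2]   # scale lambda so the z component vanishes
    p_g = o_c + lam * v_g
    return p_g[:2]
```

With a Kappa angle of zero the visual axis coincides with the optical axis, and an eye 300 mm in front of the screen looking straight at it yields P_g directly below the cornea's (x, y) position.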
- the electronic device focuses based on the location information of the attention point.
- the electronic device first determines the focus frame based on the position information of the attention point, and then uses the phase difference data in the focus frame to focus.
- the electronic device determines the focus frame based on the position information of the attention point on the display screen.
- the electronic device may preset the size and number of focus frames, each focus frame being the same size. After determining the location information of the attention point, the positions of multiple focus frames can be determined based on the attention point; further, each focus frame is placed at its corresponding position. Optionally, the electronic device may display each focus frame.
- the electronic device may determine the position of the focus frame centered on the determined attention point, or may determine the position of the focus frame centered on the position of the subject determined based on the attention point.
- the electronic device may determine the position of the focusing frame centering on the determined attention point.
- the electronic device detects that the user's gaze duration on the attention point satisfies the preset condition described in the above implementation 1, and determines the position of the focus frame with the attention point as the center.
- the preset conditions described in Implementation Mode 1 refer to the description above, which will not be repeated here.
- the electronic device may first determine whether the user's physiological feature information matches a preset expression or action instruction, and when it matches, the electronic device takes the attention point as the center to determine the placement position of the focus frame.
- the electronic device may preset five focusing frames 1301 as shown in FIG. 13A .
- the position of the attention point determined by the electronic device is position 1306.
- the electronic device, centering on position 1306, determines the positions of the five focus frames 1301, 1302, 1303, 1304 and 1305.
- Implementation manner B: the electronic device determines the position of the focus frame centered on the position of the subject determined based on the attention point.
- the electronic device may determine the location of the focusing frame in combination with the user's voice information. Specifically, the electronic device responds to the above operation of triggering focus based on the attention point, acquires the sound in the environment in real time through the microphone, and recognizes the voice information in the sound in the environment.
- when the electronic device determines the position information of the user's attention point and detects that the user's voice information includes information about a scene or person in the preview screen, it recognizes that scene or person in the area within the preset range where the attention point is located; furthermore, the electronic device determines the position of the focus frame with the position of the subject as the center.
- the area within the preset range where the attention point is located may be an area within a range of preset pixel height and preset pixel width centered on the attention point.
- when the electronic device determines the position information of the user's attention point, it identifies the subject (scene or person) at the position of the user's attention point (position A), and then focuses based on the position information of the subject at position A.
- when the electronic device detects that the user's attention has shifted to another position (position B), the electronic device recognizes the scenes or people within the preset range of position B; when it recognizes one of the same type as the subject at position A, it records that one as the subject at position B and determines its position information. Further, the electronic device determines the focus frame with the position of the subject as the center, and then executes the focusing process.
- the same category refers to scenes or people with the same characteristics, such as human faces, buildings, roads, vehicles, plants, animals, etc. It should be understood that the location information of position A and the location information of the subject at position A may be the same, or there may be a certain deviation between them.
- for an exemplary description of the process in which the electronic device focuses based on the last focused subject, reference may be made to the descriptions of FIGS. 8A-8B in the above-mentioned implementation mode 4, which are not repeated here.
- the electronic device may preset five focusing frames 1301 as shown in FIG. 13A .
- the position of the subject determined by the electronic device based on the position of the attention point is position 1306.
- the electronic device, centering on the position 1306, determines the five focus frames 1301, 1302, 1303, 1304 and 1305 as shown in FIG. 13C.
- the number of preset focus frames may be 5 in the above example, or may be other, such as 9, which is not limited in this embodiment of the present application.
- the way the electronic device places the focus frames is not limited to the cross shape shown in FIG. 13C above; it may also be a square-ring shape, a nine-square grid, etc.
- the embodiment of the present application does not limit the placement method and position of the focusing frame.
- the position and size of the focus frame can also be determined based on the region of interest frame determined from the position information of the attention point in FIGS. 6E-6G, 7C, 8A and 8B above; for example, the size of the focus frame is that of the ROI box, and the focus frame is placed at the center of the ROI box.
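The cross-shaped placement of five equal-size focus frames around a center point can be sketched as follows; the function name, the top-left-corner coordinate convention, and the spacing of one frame width between neighbouring frames are assumptions for illustration, not the patent's exact layout.

```python
def cross_focus_frames(cx, cy, size):
    """Place five equal-size square focus frames in a cross pattern:
    one centered on the point (cx, cy) and one on each side (left,
    right, above, below).  Returns the (left, top) corner of each."""
    offsets = [(0, 0), (-size, 0), (size, 0), (0, -size), (0, size)]
    return [(cx - size // 2 + dx, cy - size // 2 + dy) for dx, dy in offsets]
```

Other layouts mentioned in the text (square ring, nine-square grid) would only change the `offsets` list.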
- the electronic device focuses using the phase difference data in the focusing frame.
- the electronic device uses phase focusing to focus. Specifically, the electronic device obtains the phase difference data of the image in the focus frame (such as the mean value of the phase difference) through an image sensor with phase-difference detection pixels, then searches a look-up table for the target offset corresponding to the phase difference data; further, the electronic device drives the voice coil motor to move by the target offset to adjust the position of the lens and achieve focus.
- the offset includes the distance and direction between the lens and the focal plane
- the look-up table includes a plurality of phase difference data and their corresponding offsets.
- the look-up table can be obtained through calibration with a fixed chart: for a fixed chart, the lens is moved (that is, the offset is changed), the phase difference corresponding to each offset is calculated, and the results are recorded as the look-up table.
- the electronic device determines that the average value of the phase differences in multiple focus frames is the target phase difference data; further, the electronic device searches for the target offset corresponding to the target phase difference data, and drives the voice coil motor to move by the target offset to adjust the position of the lens, thereby focusing on the subject at the attention point.
- the target phase difference data may also be other values calculated based on the phase differences in multiple focus frames.
- for example, the target phase difference data is the maximum value of the phase differences in multiple focus frames; this is not limited in this embodiment of the present application.
- the target phase difference data may be the average value of the phase differences in the focus frames 1301 , 1302 , 1303 , 1304 , 1305 and 1306 .
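The look-up-table step described above can be sketched as follows, assuming a hypothetical calibrated dictionary mapping phase difference to lens offset (distance in micrometres, sign encoding direction) and a simple nearest-neighbour search; a real implementation would interpolate between calibration points and use the sensor vendor's data format.

```python
def lens_offset_from_pd(pd_means, lut):
    """Average the mean phase differences measured in each focus frame
    into the target phase difference, then look up the lens offset of
    the nearest calibrated entry in the phase-difference -> offset
    table obtained from fixed-chart calibration."""
    target_pd = sum(pd_means) / len(pd_means)       # target phase difference
    nearest_pd = min(lut, key=lambda pd: abs(pd - target_pd))
    return lut[nearest_pd]                           # signed offset to drive the VCM
```

The returned offset is then handed to the voice coil motor driver to move the lens toward the focal plane.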
- the electronic device may also use contrast focusing, laser focusing, or combined focusing to focus.
- Combined focus is any two or any three of phase focus, contrast focus, and laser focus.
- the focusing manner is not limited in this embodiment of the present application.
- the exemplary electronic device 100 provided by the embodiment of the present application is introduced below.
- FIG. 14 shows a schematic structural diagram of the electronic device 100 .
- the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, and A subscriber identification module (subscriber identification module, SIM) card interface 195 and the like.
- the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
- the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
- the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
- the illustrated components can be realized in hardware, software or a combination of software and hardware.
- the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
- the controller may be the nerve center and command center of the electronic device 100 .
- the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
- a memory may also be provided in the processor 110 for storing instructions and data.
- the memory in processor 110 is a cache memory.
- the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
- processor 110 may include one or more interfaces.
- the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, etc.
- the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
- processor 110 may include multiple sets of I2C buses.
- the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193, and the like through different I2C bus interfaces.
- the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
- the I2S interface can be used for audio communication.
- processor 110 may include multiple sets of I2S buses.
- the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
- the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
- the PCM interface can also be used for audio communication, to sample, quantize, and encode an analog signal.
- the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
- the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
- the UART interface is a universal serial data bus used for asynchronous communication.
- the bus may be a bidirectional communication bus, which converts the data to be transmitted between serial and parallel forms.
- a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
- the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
- the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
- the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
- MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
- the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
- the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
- the GPIO interface can be configured by software.
- the GPIO interface can be configured as a control signal or as a data signal.
- the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on.
- the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
- the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
- the USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
- the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
- in other embodiments, the electronic device 100 may also adopt an interface connection manner different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
- the charging management module 140 is configured to receive a charging input from a charger.
- the charger may be a wireless charger or a wired charger.
- the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
- the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.
- the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
- the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
- the power management module 141 may also be disposed in the processor 110 .
- the power management module 141 and the charging management module 140 may also be set in the same device.
- the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
- Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
- Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
- the antenna may be used in conjunction with a tuning switch.
- the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
- the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
- the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
- at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
- at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
- a modem processor may include a modulator and a demodulator.
- the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
- the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
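As an illustrative sketch (not the modem's actual implementation), up-mixing a low-frequency baseband sequence to a medium-high frequency can be modelled as multiplication by a cosine carrier; demodulation would mix back down and low-pass filter, which is omitted here. All names and the sample values are assumptions.

```python
import math

def modulate(baseband, carrier_freq_hz, sample_rate_hz):
    """Mix a low-frequency baseband sequence up to a carrier frequency
    (simple double-sideband modulator)."""
    return [s * math.cos(2.0 * math.pi * carrier_freq_hz * i / sample_rate_hz)
            for i, s in enumerate(baseband)]

# A constant baseband mixed with a carrier at a quarter of the sample rate
# steps through the carrier's sample values 1, 0, -1, 0, ...
passband = modulate([2.0] * 8, carrier_freq_hz=2.0, sample_rate_hz=8.0)
```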
- the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
- the modem processor may be a stand-alone device.
- the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
- the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
- the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
- the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
- the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc.
- the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
- the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
- the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
- Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
- the display screen 194 is used to display images, videos and the like.
- the display screen 194 includes a display panel.
- the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
- the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
- the display panel can be realized by using OLED, AMOLED, or FLED, so that the display screen 194 can be bent.
- a display screen that can be bent is called a foldable display screen.
- the foldable display screen may be one screen, or may be a display screen composed of multiple screens pieced together, which is not limited here.
- the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
- Camera 193 is used to capture still images or video.
- the object generates an optical image through the lens and projects it to the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
- the ISP outputs the digital image signal to the DSP for processing.
- the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
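For illustration, a YUV-to-RGB conversion of the kind the DSP might perform can be sketched with the standard full-range BT.601 coefficients. The exact coefficients and pixel format used by the device are not specified in the application, so this is an assumption.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample (each component 0-255) to RGB."""
    def clamp(x):
        return max(0, min(255, round(x)))

    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return clamp(r), clamp(g), clamp(b)
```

With neutral chroma (u = v = 128) the conversion reduces to a greyscale pass-through.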
- the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
- the camera 193 includes a front camera and a rear camera. The front camera can capture a target image of the user, and the electronic device determines, based on the target image, the position information of the user's attention point on the display screen of the electronic device.
- for the target image, refer to the related description of S902 in FIG. 9 above.
- for the process of determining the position information of the attention point based on the target image, refer to the related description of S903-S905 in FIG. 9 above, which will not be repeated here.
- the ISP is used for processing the data fed back by the camera 193 .
- light is transmitted through the lens to the photosensitive element of the camera, where the light signal is converted into an electrical signal; the photosensitive element of the camera transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
- the ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
- ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
- the ISP may be located in the camera 193 .
- Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
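The Fourier-transform step mentioned above can be sketched as a direct evaluation of the signal energy at one DFT frequency bin. This is an illustrative reference computation, not the DSP's actual routine; the function name is an assumption.

```python
import math

def frequency_energy(samples, k):
    """Energy of the signal at DFT bin k, computed as a direct Fourier sum."""
    n = len(samples)
    re = sum(s * math.cos(2.0 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = -sum(s * math.sin(2.0 * math.pi * k * i / n) for i, s in enumerate(samples))
    return re * re + im * im
```

A pure tone concentrates its energy in its own bin: for a unit cosine at bin 3 over 32 samples, the energy at bin 3 is (32/2)² = 256 and essentially zero elsewhere.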
- the ISP can also be used to automatically focus on the subject at the attention point based on the position information of the attention point.
- for the specific focusing process, please refer to the relevant description in S906-S907 in FIG. 9 above, which will not be repeated here.
- Video codecs are used to compress or decompress digital video.
- the electronic device 100 may support one or more video codecs.
- the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
- the NPU is a neural-network (NN) computing processor.
- Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
- the NPU may be used to identify images of human faces and eyes in the target image.
- the NPU can also be used to identify the subject around the attention point based on the attention point, and further focus based on the position information of the subject.
- for this process, please refer to the relevant descriptions of implementation B in S906 and of S907 in FIG. 9 above, which will not be repeated here.
- the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
- the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving files such as music and videos in the external memory card.
- the internal memory 121 may be used to store computer-executable program codes including instructions.
- the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
- the internal memory 121 may include an area for storing programs and an area for storing data.
- the program storage area can store an operating system, at least one application program required by a function (such as a sound playing function or an image playing function), and the like.
- the data storage area can store data created during the use of the electronic device 100 (such as audio data and a phone book), and the like.
- the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
- the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
- the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
- the audio module 170 may also be used to encode and decode audio signals.
- the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
- the speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
- the electronic device 100 can play music through the speaker 170A, or answer a hands-free call through it.
- the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
- the receiver 170B can be placed close to the human ear to receive the voice.
- the microphone 170C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C, inputting the sound signal into the microphone 170C.
- the electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
- the earphone interface 170D is used for connecting wired earphones.
- the earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
- the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
- pressure sensor 180A may be disposed on display screen 194 .
- there are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
- a capacitive pressure sensor may be composed of at least two parallel plates of conductive material.
- the electronic device 100 determines the intensity of pressure according to the change in capacitance.
- the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
- the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
- touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, the instruction of creating a new short message is executed.
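The example above is a threshold dispatch on touch intensity. A minimal sketch follows, in which the threshold value and the action names are illustrative assumptions (the application gives no concrete values):

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalised intensity; not specified in the application

def sms_icon_action(touch_intensity):
    """Map a touch on the short-message application icon to an instruction,
    following the example above."""
    if touch_intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_messages"      # light press: view messages
    return "create_new_short_message"     # press at or above threshold: new message
```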
- the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
- the angular velocity of the electronic device 100 around three axes may be determined by the gyro sensor 180B.
- the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
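Under a simple thin-lens, small-angle assumption, the compensation distance for a detected shake angle can be sketched as below; the formula and the 26 mm focal length in the usage line are illustrative assumptions, not values from the application.

```python
import math

def stabilisation_shift_mm(shake_angle_deg, focal_length_mm):
    """Lateral lens displacement that cancels a small angular shake,
    assuming a thin lens of the given focal length."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))

# A reverse movement of this magnitude counteracts the detected shake.
shift = stabilisation_shift_mm(shake_angle_deg=1.0, focal_length_mm=26.0)
```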
- the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
- the air pressure sensor 180C is used to measure air pressure.
- the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
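A common way to convert a measured air pressure into altitude is the international barometric formula. The application does not specify which conversion the device uses, so the following is only an illustrative sketch:

```python
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Barometric altitude (metres) from a measured air pressure, using the
    international barometric formula (an assumed conversion)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```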
- the magnetic sensor 180D includes a Hall sensor.
- the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
- when the electronic device 100 is a clamshell device, the electronic device 100 can detect the opening and closing of the clamshell according to the magnetic sensor 180D.
- further, features such as automatic unlocking upon opening the flip cover can be set according to the detected opening or closing state of the leather case or the clamshell.
- the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
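As one illustrative use, landscape/portrait switching can be decided by comparing the gravity components the acceleration sensor reports along two device axes. This is a simplified sketch; the axis conventions and function name are assumptions.

```python
def orientation(accel_x, accel_y):
    """Classify portrait/landscape from gravity components (m/s^2) measured
    along the device's x (short edge) and y (long edge) axes while stationary."""
    return "portrait" if abs(accel_y) >= abs(accel_x) else "landscape"
```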
- the distance sensor 180F is used to measure the distance.
- the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
- Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
- the light emitting diodes may be infrared light emitting diodes.
- the electronic device 100 emits infrared light through the light emitting diode.
- Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
- the electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to make a call, so as to automatically turn off the screen to save power.
- the proximity light sensor 180G can also be used in leather case mode, automatic unlock and lock screen in pocket mode.
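The detection logic above reduces to a threshold test on the photodiode reading. A minimal sketch, in which the threshold value, the reading units, and the function names are illustrative assumptions:

```python
REFLECTION_THRESHOLD = 100  # assumed ADC count for "sufficient" reflected infrared light

def object_nearby(photodiode_reading):
    """Sufficient reflected light means an object is near the device."""
    return photodiode_reading >= REFLECTION_THRESHOLD

def screen_should_turn_off(in_call, photodiode_reading):
    """Turn the screen off when the device is held close to the ear during a call."""
    return in_call and object_nearby(photodiode_reading)
```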
- the ambient light sensor 180L is used for sensing ambient light brightness.
- the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
- the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
- the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
- the fingerprint sensor 180H is used to collect fingerprints.
- the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
- the temperature sensor 180J is used to detect temperature.
- the electronic device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 may reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
- in some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the electronic device 100 from shutting down abnormally due to the low temperature.
- the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
- the touch sensor 180K is also known as a "touch panel".
- the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
- the touch sensor 180K is used to detect a touch operation on or near it.
- the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
- Visual output related to the touch operation can be provided through the display screen 194 .
- the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
- the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined into a bone conduction earphone.
- the audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function.
- the application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
- the keys 190 include a power key, a volume key and the like.
- the keys 190 may be mechanical keys, or may be touch keys.
- the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
- the motor 191 can generate a vibrating reminder.
- the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
- touch operations applied to different applications may correspond to different vibration feedback effects.
- the motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 .
- touch operations in different application scenarios (for example, time reminders, receiving messages, alarm clocks, games, etc.) may also correspond to different vibration feedback effects.
- the touch vibration feedback effect can also support customization.
- the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
- the SIM card interface 195 is used for connecting a SIM card.
- the SIM card can be connected to or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195.
- the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
- the SIM card interface 195 can support a Nano-SIM card, a Micro-SIM card, a standard SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
- the SIM card interface 195 is also compatible with different types of SIM cards.
- the SIM card interface 195 is also compatible with external memory cards.
- the electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication.
- the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
- the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
- the term "when" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting".
- the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted to mean "when determining", "in response to determining", "on detecting (a stated condition or event)", or "in response to detecting (a stated condition or event)".
- all or part of them may be implemented by software, hardware, firmware or any combination thereof.
- when implemented using software, the embodiments may be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
- the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) manner.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
- the available media may be magnetic media (e.g., a floppy disk, hard disk, or magnetic tape), optical media (e.g., a DVD), or semiconductor media (e.g., a solid-state drive), etc.
- the processes can be implemented by a computer program instructing the related hardware.
- the programs can be stored in computer-readable storage media.
- When the programs are executed may include the processes of the foregoing method embodiments.
- the aforementioned storage medium includes a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Disclosed in the embodiments of the present application are a focusing method and a related device. The method comprises: in response to a first operation of a user, an electronic device starting photographing, displaying a first interface, and displaying, in the first interface, preview pictures collected by a camera; and the electronic device displaying a first preview picture in the first interface, wherein the first preview picture is a preview picture collected by the camera by taking an attention point, which satisfies a preset condition, as a focusing point, and the attention point is a position point of the gaze of the user on the first interface. In the embodiments of the present application, automatic focusing can be performed according to an attention point of a user, thereby reducing user operations and improving the user experience.
Description
This application claims priority to Chinese Patent Application No. 202110713771.5, entitled "A Focusing Method and Related Device", filed with the China Patent Office on June 25, 2021, which is incorporated herein by reference in its entirety.
The present application relates to the field of terminal technologies, and in particular, to a focusing method and a related device.
With the popularization of smart terminal devices, people often use them to take photos or videos in daily life. When using a smart terminal device such as a mobile phone or a tablet computer to take a photo or video, the user can touch the preview image displayed on the device to select a focus point. Based on the position of the selected focus point, the device drives a voice coil motor to adjust the position of the lens, that is, to change the distance between the lens and the image sensor so that the focal plane falls on the image sensor, thereby achieving focusing and capturing an image in which the focus area is clear.
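The focusing principle described above (moving the lens so that the focal plane falls on the image sensor) can be illustrated with the thin-lens equation. This is a generic optics sketch, not a formula from the application; the function name and values are illustrative.

```python
def lens_to_sensor_distance(focal_length_mm: float, subject_distance_mm: float) -> float:
    """Thin-lens equation 1/f = 1/u + 1/v, solved for the image distance v.

    Moving the lens so that the sensor sits at distance v behind it places the
    focal plane on the sensor, i.e. a subject at distance u is in focus.
    """
    if subject_distance_mm <= focal_length_mm:
        raise ValueError("subject must be beyond the focal length")
    # 1/v = 1/f - 1/u  ->  v = f*u / (u - f)
    return (focal_length_mm * subject_distance_mm) / (subject_distance_mm - focal_length_mm)


# A distant subject focuses near v = f; a close subject needs a larger v,
# which is why the voice coil motor moves the lens away from the sensor.
v_far = lens_to_sensor_distance(4.0, 10_000.0)   # distant subject
v_near = lens_to_sensor_distance(4.0, 100.0)     # close subject
```

The difference `v_near - v_far` is the motor travel the device must supply when refocusing from a far subject to a near one.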
However, when the user holds the smart terminal device with one hand, selecting the focus point by touch is inconvenient to operate.
Contents of the invention
The embodiments of the present application disclose a focusing method and a related device, which can automatically focus according to a user's attention point, thereby reducing user operations and improving user experience.
According to a first aspect, an embodiment of the present application discloses a focusing method, including: in response to a first operation of a user, an electronic device starts shooting, displays a first interface, and displays, on the first interface, a preview image captured by a camera; and the electronic device displays a first preview image on the first interface, where the first preview image is a preview image captured by the camera with an attention point that satisfies a preset condition as the focus point, and the attention point is the position point at which the user's line of sight falls on the first interface.
The first operation may be an operation in which the user taps a camera application on the screen to start shooting.
In the embodiments of the present application, when the user holds the electronic device to take a picture, the electronic device can automatically focus based on the acquired position of the user's attention point in the preview image. The electronic device can focus accurately on the user's attention point, the operation is simple, and user experience can be improved.
In a possible implementation, the electronic device displaying the first preview image on the first interface specifically includes: the electronic device acquires a target image through a front-facing camera, where the target image includes an image of the user's eyes; the electronic device determines the user's attention point based on the target image; when the user's attention point satisfies the preset condition, the electronic device uses the user's attention point as the focus point, adjusts the focal length of the camera, and captures the first preview image through the camera; and the electronic device displays the first preview image on the first interface. In this way, the electronic device acquires the target image of the user in real time through the front-facing camera and determines the position of the attention point, providing a stable input source of focus points. The electronic device can focus continuously, thereby continuously obtaining clear images of the user's region of interest, which improves user experience.
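The flow of this implementation (capture an eye image with the front-facing camera, estimate the attention point, then refocus the rear camera on it) can be sketched as one step of a loop. The callables `capture_eye_image`, `estimate_attention_point`, `satisfies_preset_condition`, and `focus_at` are hypothetical interfaces introduced for illustration; they are not names defined by the application.

```python
from typing import Callable, Optional, Tuple

Point = Tuple[int, int]  # attention point in preview-screen coordinates

def autofocus_step(
    capture_eye_image: Callable[[], object],
    estimate_attention_point: Callable[[object], Optional[Point]],
    satisfies_preset_condition: Callable[[Point], bool],
    focus_at: Callable[[Point], None],
) -> Optional[Point]:
    """One iteration of gaze-driven autofocus.

    Returns the point that was focused on, or None if no valid attention
    point was found in this iteration.
    """
    eye_image = capture_eye_image()                  # front-facing camera frame
    point = estimate_attention_point(eye_image)      # gaze estimation
    if point is not None and satisfies_preset_condition(point):
        focus_at(point)                              # drive the shooting camera's focus
        return point
    return None
```

Running this step for every front-camera frame is what provides the "stable input source of focus points" described above.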
In a possible implementation, the method further includes: the electronic device detects a second operation that enables the attention-point focusing function; and in response to the second operation, the electronic device displays the first preview image on the first interface.
The second operation may be a trigger operation that enables the attention focusing function, or a voice instruction that enables the attention focusing function. For details, refer to the related descriptions of FIG. 4A to FIG. 4C.
In a possible implementation, the electronic device determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, the electronic device determines the duration for which the attention point is detected. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when the duration of the attention point is not less than a first duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera. In this way, when the user's line of sight settles on a certain attention point, that attention point can be focused on, thereby improving the stability of the preview image.
The duration for which the attention point is detected may be the duration of the user's gaze at a certain attention point. For specific descriptions, refer to the related descriptions of FIG. 5A to FIG. 5C.
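The dwell-time rule above can be sketched as a small tracker. The 300 ms threshold and the 50 px "same point" radius are illustrative assumptions; the application only requires that the duration be "not less than a first duration threshold" and does not name concrete values.

```python
import math

FIRST_DURATION_THRESHOLD_MS = 300.0  # illustrative; the application names no value
SAME_POINT_RADIUS_PX = 50.0          # assumed tolerance for "the same point"

class DwellTracker:
    """Tracks how long the user's gaze has stayed on one attention point."""

    def __init__(self):
        self.point = None
        self.dwell_ms = 0.0

    def update(self, point, dt_ms):
        """Feed one gaze sample; True when the dwell reaches the threshold."""
        if self.point is not None and math.dist(point, self.point) <= SAME_POINT_RADIUS_PX:
            self.dwell_ms += dt_ms
        else:
            # Gaze moved to a new point: restart the dwell timer there.
            self.point, self.dwell_ms = point, 0.0
        return self.dwell_ms >= FIRST_DURATION_THRESHOLD_MS
```

A `True` return corresponds to the condition under which the camera refocuses on the attention point.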
In a possible implementation, the electronic device determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, the electronic device determines the duration for which the attention point is detected and the interruption time during which no attention point is detected. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when the duration for which the attention point is detected is not less than a second duration threshold and the interruption time during which no attention point is detected is less than a third duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera. In this way, when the user's line of sight is unstable, for example, when the user glances elsewhere and then looks back, the focus point of the preview image can remain unchanged, thereby improving the stability of the image and the user experience.
The duration for which the attention point is detected may be the duration of the user's gaze at the first attention point, and the interruption time during which no attention point is detected may be the duration for which the user's line of sight leaves the first attention point. For specific descriptions, refer to the related descriptions of FIG. 5D.
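This variant keeps the focus point when the gaze leaves it only briefly. A minimal sketch; the two threshold values are illustrative assumptions, since the application specifies only a "second duration threshold" and a "third duration threshold" without concrete numbers.

```python
SECOND_DURATION_THRESHOLD_MS = 300.0   # minimum dwell before focusing (assumed value)
THIRD_DURATION_THRESHOLD_MS = 500.0    # longest glance-away that is tolerated (assumed value)

class StableGaze:
    """Dwell tracking that tolerates brief interruptions of the gaze."""

    def __init__(self):
        self.dwell_ms = 0.0   # time the gaze has stayed on the current point
        self.away_ms = 0.0    # time the gaze has been off it

    def update(self, on_point: bool, dt_ms: float) -> bool:
        """on_point: whether this gaze sample falls on the current attention point.

        Returns True while the point should still be used as the focus point.
        """
        if on_point:
            self.dwell_ms += dt_ms
            self.away_ms = 0.0
        else:
            self.away_ms += dt_ms
            if self.away_ms >= THIRD_DURATION_THRESHOLD_MS:
                # The glance away lasted too long: the accumulated dwell no longer counts.
                self.dwell_ms = 0.0
        return (self.dwell_ms >= SECOND_DURATION_THRESHOLD_MS
                and self.away_ms < THIRD_DURATION_THRESHOLD_MS)
```

While `update` keeps returning `True` during a short glance away, the preview's focus point stays unchanged, which is exactly the stability behavior described above.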
In a possible implementation, the electronic device determining the user's attention point based on the target image specifically includes: the electronic device determines the user's attention point based on the target image and detects a user action. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when the user action detected in the target image is a setting action, using the user's current attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera, where the setting action includes one or more of pupil dilation, a "nod", and an "OK" gesture. In this way, the electronic device can perceive the user's action, which indicates that the user is paying close attention to the object at the current attention point; the obtained image better matches the user's intention, and no manual or other operation by the user is required, which improves user experience.
The target image may include the user's eyes, and may further include the user's face and parts of the user's body.
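The action-gated trigger above can be sketched as a simple check. Detecting each action (from pupil size, head pose, or hand pose) is assumed to be provided elsewhere; the action labels below are hypothetical names for the three setting actions the implementation lists.

```python
# The setting actions named in this implementation; the detector that
# produces these labels is assumed, not defined by the application.
SETTING_ACTIONS = {"pupil_dilation", "nod", "ok_gesture"}

def focus_if_action(detected_action, current_attention_point, focus_at):
    """Focus on the current attention point only when a setting action is seen."""
    if detected_action in SETTING_ACTIONS and current_attention_point is not None:
        focus_at(current_attention_point)
        return True
    return False
```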
In a possible implementation, the electronic device determining the user's attention point based on the target image specifically includes: at a first moment, the electronic device determines, based on the target image, a first duration for which the user is detected at a first attention point. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when it is detected that the first duration is greater than a fourth duration threshold, using the first attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera. The method further includes: at a second moment, the electronic device determines, based on the target image, a second duration for which the user is detected at a second attention point, where the first attention point and the second attention point are attention points at different positions, and the second moment is after the first moment; and when it is detected that the second duration is greater than the fourth duration threshold, using the second attention point as the focus point, adjusting the focal length of the camera again, and capturing a second preview image through the camera. In this way, when the user's attention point shifts from one position to an adjacent position, only a small motor adjustment is needed for each focus position each time, achieving smooth focusing, which improves the imaging quality of the image while reducing the resource consumption of the electronic device.
The first duration may be the duration of the user's gaze at the first attention point, and the second duration may be the duration of the user's gaze at the second attention point. The first attention point and the second attention point are position points noticed successively by the user's line of sight, corresponding to different scenes. For specific descriptions, refer to the related descriptions of FIG. 6A to FIG. 6D.
In a possible implementation, the electronic device determining the user's attention point based on the target image specifically includes: the electronic device determines the position of the user's attention point based on the target image. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when the position of the user's attention point is at the boundary between multiple objects in the target picture, the electronic device matches the objects in the target picture based on the user's voice information; and when the electronic device matches a first focus object, using the first focus object as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera, where the first focus object is one scene or one person among the multiple objects in the target picture. In this way, the object that the user pays attention to can be determined more accurately and focused on, improving focusing accuracy.
For specific descriptions, refer to the related descriptions of the implementation in which the electronic device performs focusing in combination with the user's voice information.
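The voice-based disambiguation above can be sketched as matching a recognized utterance against the labels of the objects whose boundary the attention point falls on. The speech recognizer and object detector are assumed to exist elsewhere; labels and boxes below are illustrative.

```python
def match_focus_object(transcript: str, candidates):
    """Pick the candidate object whose label appears in the user's utterance.

    candidates: list of (label, bounding_box) pairs for the objects at whose
    boundary the attention point lies. Returns the first match, or None.
    """
    words = transcript.lower()
    for label, box in candidates:
        if label.lower() in words:
            return label, box
    return None

# e.g. the attention point lies between a person and a tree,
# and the user says "focus on the tree":
candidates = [("person", (10, 10, 80, 200)), ("tree", (70, 0, 160, 220))]
```

A production system would match on recognized object classes rather than raw substrings, but the control flow is the same: the voice information selects one of the boundary objects as the first focus object.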
In a possible implementation, the electronic device determining the user's attention point based on the target image specifically includes: the electronic device determines, based on the target image, the position of the user's attention point and the type of the object at that position. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when the attention point is at the boundary between multiple types of objects in the target picture, using the focus object of the same type as the previous focus object as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera. In this way, the object that the user pays attention to can be determined more accurately and focused on, improving focusing accuracy.
For specific descriptions, refer to the related descriptions of the implementation in which the electronic device focuses based on the previously focused subject.
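The same-type-as-last-focus rule above can be sketched in a few lines. The object types and boxes are illustrative; how they are detected is assumed, not specified here.

```python
def pick_by_last_focus(last_focus_type, candidates):
    """When the attention point straddles several objects, prefer the one
    whose type matches the subject that was focused on last time."""
    for obj_type, box in candidates:
        if obj_type == last_focus_type:
            return obj_type, box
    return None  # no candidate of that type; fall back to other rules

# e.g. the attention point straddles a flower and a cat,
# and the previous shot was focused on a cat:
boundary_objects = [("flower", (0, 0, 50, 50)), ("cat", (40, 0, 120, 90))]
```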
In a possible implementation, the first preview image includes a focus frame, and the attention point is the center position of the focus frame. In this way, the accuracy of the focus position in the preview image can be ensured.
In a possible implementation, the first preview image includes a focus frame, and the center of the photographed subject at the attention point is the center position of the focus frame. In this way, the accuracy of the focus position in the preview image can be ensured.
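Both focus-frame variants center the frame on a point (the attention point itself, or the center of the subject at it). A minimal sketch that also clamps the frame to the preview bounds; the clamping behavior is an assumption for illustration, not stated in the application.

```python
def focus_frame(center, frame_size, screen_size):
    """Return (left, top, right, bottom) of a focus frame centered on
    `center`, shifted as needed to stay inside a preview of `screen_size`."""
    cx, cy = center
    w, h = frame_size
    sw, sh = screen_size
    left = min(max(cx - w // 2, 0), sw - w)
    top = min(max(cy - h // 2, 0), sh - h)
    return left, top, left + w, top + h
```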
In a possible implementation, the electronic device determining the user's attention point based on the target image includes:

The attention point of the first person is P_g, expressed as:

P_g = O_e + c·V_o + λ·V_g

where λ is the magnitude of the vector from the corneal center O_c to the attention point P_g; its defining expression is given in the source as a formula image that is not reproduced here.

The eyeball structure parameters of the first person include: the eyeball center O_e; the Kappa angle, which is the angle between the visual axis and the optical axis, with horizontal component α and vertical component β; the rotation parameter R and the translation parameter t of the head coordinate system; and V_s, the unit normal vector of the plane in which the display screen of the electronic device lies.

The deflection angles of the optical-axis unit vector V_o, the expression of V_o itself, and the expression of the visual-axis unit vector V_g are likewise given in the source as formula images that are not reproduced here.
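The gaze model above computes the attention point by following the visual axis V_g from the corneal center O_c (= O_e + c·V_o) until it intersects the screen plane with unit normal V_s, i.e. P_g = O_c + λ·V_g. The concrete parameterization below (spherical angles for the optical axis, with the Kappa components α and β applied as horizontal/vertical offsets to obtain the visual axis) is one common convention and is an assumption: the application's exact formulas are given as images not reproduced in the text.

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def axis_from_angles(theta, phi):
    """Unit direction from a horizontal angle theta and a vertical angle phi
    (one common spherical parameterization; assumed, not the patent's own)."""
    return unit((math.cos(phi) * math.sin(theta),
                 math.sin(phi),
                 math.cos(phi) * math.cos(theta)))

def gaze_point(o_c, theta, phi, alpha, beta, screen_point, v_s):
    """Intersect the visual axis with the screen plane: P_g = O_c + lambda*V_g.

    o_c: corneal center; (theta, phi): optical-axis angles; (alpha, beta):
    Kappa-angle components added to them to obtain the visual axis V_g;
    screen_point: any point on the screen plane; v_s: its unit normal.
    """
    v_g = axis_from_angles(theta + alpha, phi + beta)
    denom = sum(g * s for g, s in zip(v_g, v_s))
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the screen plane; no intersection
    lam = sum((p - o) * s for p, o, s in zip(screen_point, o_c, v_s)) / denom
    return tuple(o + lam * g for o, g in zip(o_c, v_g))
```

With the eye 400 mm in front of a screen lying in the z = 0 plane and the gaze aimed straight at it, the computed P_g lands on the screen directly in front of the eye, as expected.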
According to a second aspect, an embodiment of the present application discloses an electronic device, including a processor, a camera, and a touchscreen. The processor is configured to: in response to a first operation of a user, instruct the camera to start shooting, instruct the touchscreen to display a first interface, and display, on the first interface, a preview image captured by the camera. The processor is further configured to instruct the touchscreen to display a first preview image on the first interface, where the first preview image is a preview image captured by the camera with an attention point that satisfies a preset condition as the focus point, and the attention point is the position point at which the user's line of sight falls on the first interface.
The first operation may be an operation in which the user taps a camera application on the screen to start shooting.
In the embodiments of the present application, when the user holds the electronic device to take a picture, the electronic device can automatically focus based on the acquired position of the user's attention point in the preview image. The electronic device can focus accurately on the user's attention point, the operation is simple, and user experience can be improved.
In a possible implementation, the processor instructing the touchscreen to display the first preview image on the first interface specifically includes: the processor is configured to acquire a target image through a front-facing camera, where the target image includes an image of the user's eyes; and the processor is further configured to determine the user's attention point based on the target image; when the user's attention point satisfies the preset condition, use the user's attention point as the focus point, adjust the focal length of the camera, and capture the first preview image through the camera; and display the first preview image on the first interface. In this way, the electronic device acquires the target image of the user in real time through the front-facing camera and determines the position of the attention point, providing a stable input source of focus points. The electronic device can focus continuously, thereby continuously obtaining clear images of the user's region of interest, which improves user experience.
In a possible implementation, the processor is further configured to detect a second operation that enables the attention-point focusing function; and the processor is further configured to, in response to the second operation, instruct the touchscreen to display the first preview image on the first interface.
The second operation may be a trigger operation that enables the attention focusing function, or a voice instruction that enables the attention focusing function. For details, refer to the related descriptions of FIG. 4A to FIG. 4C.
In a possible implementation, the processor determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, determining the duration for which the attention point is detected. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when the duration of the attention point is not less than a first duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera. In this way, when the user's line of sight settles on a certain attention point, that attention point can be focused on, thereby improving the stability of the preview image.
The duration for which the attention point is detected may be the duration of the user's gaze at a certain attention point. For specific descriptions, refer to the related descriptions of FIG. 5A to FIG. 5C.
In a possible implementation, the processor determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, determining the duration for which the attention point is detected and the interruption time during which no attention point is detected. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when the duration for which the attention point is detected is not less than a second duration threshold and the interruption time during which no attention point is detected is less than a third duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera. In this way, when the user's line of sight is unstable, for example, when the user glances elsewhere and then looks back, the focus point of the preview image can remain unchanged, thereby improving the stability of the image and the user experience.
The duration for which the attention point is detected may be the duration of the user's gaze at the first attention point, and the interruption time during which no attention point is detected may be the duration for which the user's line of sight leaves the first attention point. For specific descriptions, refer to the related descriptions of FIG. 5D.
In a possible implementation, the processor determining the user's attention point based on the target image specifically includes: determining the user's attention point based on the target image, and detecting a user action. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when the user action detected in the target image is a setting action, using the user's current attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera, where the setting action includes one or more of pupil dilation, a "nod", and an "OK" gesture. In this way, the electronic device can perceive the user's action, which indicates that the user is paying close attention to the object at the current attention point; the obtained image better matches the user's intention, and no manual or other operation by the user is required, which improves user experience.
The target image may include the user's eyes, and may further include the user's face and parts of the user's body.
In a possible implementation, the processor determining the user's attention point based on the target image specifically includes: at a first moment, determining, based on the target image, a first duration for which the user is detected at a first attention point. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when it is detected that the first duration is greater than a fourth duration threshold, using the first attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera. The processor is further configured to: at a second moment, determine, based on the target image, a second duration for which the user is detected at a second attention point, where the first attention point and the second attention point are attention points at different positions, and the second moment is after the first moment; and when it is detected that the second duration is greater than the fourth duration threshold, use the second attention point as the focus point, adjust the focal length of the camera again, and capture a second preview image through the camera. In this way, when the user's attention point shifts from one position to an adjacent position, only a small motor adjustment is needed for each focus position each time, achieving smooth focusing, which improves the imaging quality of the image while reducing the resource consumption of the electronic device.
The first duration may be the duration of the user's gaze at the first attention point, and the second duration may be the duration of the user's gaze at the second attention point. The first attention point and the second attention point are position points noticed successively by the user's line of sight, corresponding to different scenes. For specific descriptions, refer to the related descriptions of FIG. 6A to FIG. 6D.
In a possible implementation, the processor determining the user's attention point based on the target image specifically includes: determining the position of the user's attention point based on the target image. Using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera when the user's attention point satisfies the preset condition specifically includes: when the position of the user's attention point is at the boundary between multiple objects in the target picture, the processor matches the objects in the target picture based on the user's voice information; and when the processor matches a first focus object, using the first focus object as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera, where the first focus object is one scene or one person among the multiple objects in the target picture. In this way, the object that the user pays attention to can be determined more accurately and focused on, improving focusing accuracy.
其中,具体描述可以参考电子设备结合用户的语音信息进行对焦的实施方式的相关描述。Wherein, for specific descriptions, reference may be made to related descriptions of implementations in which an electronic device performs focusing in combination with voice information of a user.
在一种可能的实现方式中,所述处理器基于所述目标图像确定用户的注意力点,具体包括:基于所述目标图像确定的所述用户的注意力点的位置,以及所述注意力点的位置物体的种类;在所述用户的注意力点满足所述预设条件时,以所述用户的注意力点为对焦点,调整所述摄像头的焦距,通过所述摄像头采集所述第一预览画面,具体包括:在所述注意力点处于目标画面中的多个种类的物体的交界位置的情况下,以上一次对焦物种类相同的对焦物为对焦点,调整所述摄像头的焦距,通过所述摄像头采集所述第一预览画面。这样,可以更加准确地确定用户关注的物体,并对这一物体进行对焦,提高对焦的准确性。In a possible implementation manner, the processor determining the user's attention point based on the target image specifically includes: determining, based on the target image, the position of the user's attention point and the type of object at that position. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and collecting the first preview picture through the camera specifically includes: in a case where the attention point is at a boundary between multiple types of objects in the target picture, adjusting the focal length of the camera with a focus object of the same type as the previous focus object as the focus point, and collecting the first preview picture through the camera. In this way, the object that the user pays attention to can be determined more accurately and focused on, improving the accuracy of focusing.
其中,具体描述可以参考电子设备基于上一次对焦的主体进行对焦的实施方式的相关描述。Wherein, for specific descriptions, reference may be made to related descriptions of implementations in which the electronic device focuses based on the last focused subject.
在一种可能的实现方式中,所述第一预览画面包括对焦框,所述注意力点为所述对焦框的中心位置。这样,可以保证预览画面中对焦位置的准确性。In a possible implementation manner, the first preview image includes a focus frame, and the attention point is a center position of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
在一种可能的实现方式中,所述第一预览画面包括对焦框,所述注意力点的拍摄主体的中心为所述对焦框的中心位置。这样,可以保证预览画面中对焦位置的准确性。In a possible implementation manner, the first preview image includes a focus frame, and the center of the subject of the attention point is the center of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
在一种可能的实现方式中,所述电子设备基于所述目标图像确定用户的注意力点包括:In a possible implementation manner, the electronic device determining the user's attention point based on the target image includes:
所述第一人的注意力点为P_g,表示为:The attention point of the first person is P_g, expressed as:
P_g = O_e + c·V_o + λ·V_g
其中,λ为角膜中心O_c与注意力点P_g的模,表示为:Wherein, λ is the modulus of the vector from the corneal center O_c to the attention point P_g, expressed as:
所述第一人的眼球结构参数中,包括:眼球中心O_e;Kappa角,为视轴和光轴的夹角,Kappa角的水平分量为α,垂直分量为β;R为旋转参数,t为平移参数,头部坐标系的坐标,V_s为电子设备的显示屏所在平面的单位法向量;The eyeball structure parameters of the first person include: the eyeball center O_e; the Kappa angle, which is the angle between the visual axis and the optical axis, with horizontal component α and vertical component β; R, a rotation parameter; t, a translation parameter; the coordinates of the head coordinate system; and V_s, the unit normal vector of the plane where the display screen of the electronic device is located;
光轴单位向量V_o的偏转角为,V_o为光轴单位向量,表示为:The deflection angle of the optical axis unit vector V_o is, and V_o, the unit vector of the optical axis, is expressed as:
V_g为视轴单位向量,表示为:V_g is the visual axis unit vector, expressed as:
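The relation P_g = O_e + c·V_o + λ·V_g above can be illustrated numerically. The following Python sketch is not the patent's implementation: the patent's own expressions for λ, V_o and V_g are given as formulas not reproduced in this text, so here λ is assumed to come from a standard ray-plane intersection of the visual-axis ray (from the corneal center O_c) with the display plane, whose unit normal is V_s.

```python
def gaze_point(O_e, V_o, V_g, c, V_s, P_s):
    """Estimate the attention point P_g = O_e + c*V_o + lam*V_g.

    O_e: eyeball center (3-vector); V_o: optical-axis unit vector;
    V_g: visual-axis unit vector; c: distance from the eyeball center
    O_e to the corneal center O_c along the optical axis; V_s: unit
    normal vector of the display plane; P_s: any point on that plane.
    lam (distance from O_c to P_g) is obtained by an assumed ray-plane
    intersection, since the patent's expression for lam is elided.
    """
    O_c = [O_e[i] + c * V_o[i] for i in range(3)]            # corneal center
    denom = sum(V_s[i] * V_g[i] for i in range(3))           # V_s . V_g
    lam = sum(V_s[i] * (P_s[i] - O_c[i]) for i in range(3)) / denom
    return [O_c[i] + lam * V_g[i] for i in range(3)]         # P_g
```

For example, with the eyeball at the origin, both axes pointing straight at a screen plane 0.3 m away along the normal, the computed P_g lands on the screen plane directly in front of the eye.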
第三方面,本申请提供了一种电子设备,包括触控屏、摄像头、一个或多个处理器和一个或多个存储器。该一个或多个处理器与触控屏、摄像头、以及一个或多个存储器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得电子设备执行上述任一方面任一项可能的实现方式中的对焦的方法。In a third aspect, the present application provides an electronic device, including a touch screen, a camera, one or more processors, and one or more memories. The one or more processors are coupled with the touch screen, the camera, and the one or more memories, and the one or more memories are used to store computer program code. The computer program code includes computer instructions. When the one or more processors execute the computer instructions, the electronic device is caused to execute the focusing method in any possible implementation manner of any of the foregoing aspects.
第四方面,本申请提供了一种电子设备,包括:一个或多个功能模块。一个或多个功能模块用于执行上述任一方面任一项可能的实现方式中的对焦的方法。In a fourth aspect, the present application provides an electronic device, including: one or more functional modules. One or more functional modules are used to execute the focusing method in any possible implementation manner of any of the above aspects.
第五方面,本申请实施例提供了一种计算机存储介质,包括计算机指令,当计算机指令在电子设备上运行时,使得上述装置执行上述任一方面任一项可能的实现方式中的对焦的方法。In a fifth aspect, an embodiment of the present application provides a computer storage medium, including computer instructions. When the computer instructions are run on an electronic device, the electronic device is caused to execute the focusing method in any possible implementation manner of any one of the above aspects.
第六方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行上述任一方面任一项可能的实现方式中的对焦的方法。In a sixth aspect, an embodiment of the present application provides a computer program product, which, when the computer program product is run on a computer, causes the computer to execute the focusing method in any possible implementation manner of any one of the above aspects.
图1A是本申请实施例提供的一种眼球结构模型的剖面示意图;FIG. 1A is a schematic cross-sectional view of an eyeball structure model provided in an embodiment of the present application;
图1B是本申请实施例提供的一种对焦方法的应用场景示意图;FIG. 1B is a schematic diagram of an application scenario of a focusing method provided by an embodiment of the present application;
图2A、图2B、图3、图4A-图4G、图5A-图5E、图6A-图6G、图7A-图7C、图8A、图8B是本申请实施例提供的一些用户界面的示意图;FIG. 2A, FIG. 2B, FIG. 3, FIG. 4A-FIG. 4G, FIG. 5A-FIG. 5E, FIG. 6A-FIG. 6G, FIG. 7A-FIG. 7C, FIG. 8A and FIG. 8B are schematic diagrams of some user interfaces provided by the embodiments of the present application;
图9是本申请实施例提供的一种对焦方法的流程示意图;FIG. 9 is a schematic flowchart of a focusing method provided by an embodiment of the present application;
图10是本申请实施例提供的一种3D眼睛模型的示意图;Fig. 10 is a schematic diagram of a 3D eye model provided by an embodiment of the present application;
图11是本申请实施例提供的一种头部坐标系向世界坐标系转换的示意图;Fig. 11 is a schematic diagram of conversion from a head coordinate system to a world coordinate system provided by an embodiment of the present application;
图12是本申请实施例提供的一种视轴和光轴的关系示意图;Fig. 12 is a schematic diagram of the relationship between the visual axis and the optical axis provided by the embodiment of the present application;
图13A-图13C是本申请实施例提供的一些用户界面的示意图;13A-13C are schematic diagrams of some user interfaces provided by the embodiments of the present application;
图14是本申请实施例提供的一种电子设备100的结构示意图。FIG. 14 is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present application.
本申请以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“所述”、“上述”、“该”和“这一”旨在也包括复数表达形式,除非其上下文中明确地有相反指示。还应当理解,本申请中使用的术语“和/或”是指并包含一个或多个所列出项目的任何或所有可能组合。The terms used in the following embodiments of the present application are only for the purpose of describing specific embodiments, and are not intended to limit the present application. As used in the specification and appended claims of this application, the singular expressions "a", "an", "said", "above", "the" and "this" are intended to also Plural expressions are included unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and includes any and all possible combinations of one or more of the listed items.
下面首先对本申请实施例涉及的术语进行解释。The terms involved in the embodiments of the present application are firstly explained below.
(1)对焦,通过照相机对焦机构改变镜片和成像面(图像传感器)的距离,使被拍物成像清晰的过程就是对焦。(1) Focusing: the process of changing the distance between the lens and the imaging surface (image sensor) through the camera's focusing mechanism so that the subject is imaged clearly.
(2)手机的自动对焦,自动对焦是利用物体光反射的原理,将反射的光被手机中的相机上的图像传感器(CCD或CMOS)接收,得到原始图像,通过对原始图像计算处理,带动电动对焦装置进行对焦的方式叫自动对焦。本质上是集成在手机ISP(图像信号处理器)中的一套数据计算方法。当取景器捕捉到最原始的图像后,这些图像数据会被当作原始资料传送至ISP中,此时ISP便会对图像数据进行分析,得到需要调整镜片的距离,进而驱动音圈马达进行调整,使得图像清晰——这一过程反映在手机使用者眼中的,便是自动对焦过程。其中,在手机的自动对焦中,镜片被锁在音圈马达中,驱动音圈马达可以改变镜片的位置。(2) Autofocus of a mobile phone. Autofocus uses the principle of light reflection from objects: the reflected light is received by the image sensor (CCD or CMOS) of the camera in the mobile phone to obtain an original image, and the original image is computed and processed to drive an electric focusing device to focus. In essence, it is a set of data calculation methods integrated in the mobile phone's ISP (image signal processor). After the viewfinder captures the original image, the image data is sent to the ISP as raw material; the ISP analyzes the image data, obtains the distance by which the lens needs to be adjusted, and then drives the voice coil motor to make the adjustment so that the image becomes clear. This process, as seen by the mobile phone user, is the autofocus process. In the autofocus of a mobile phone, the lens is locked in the voice coil motor, and driving the voice coil motor can change the position of the lens.
其中,手机的自动对焦的实现方式包括三种:相位对焦、反差对焦和激光对焦,下面分别介绍三种自动对焦的方式。Among them, there are three ways to realize the auto focus of the mobile phone: phase focus, contrast focus and laser focus. The following three ways of auto focus are introduced respectively.
a、相位对焦,是在感光元件上先预留一些遮蔽像素点,专门用来进行相位检测,通过像素之间的距离及其变化等来确定镜片相对于焦平面的偏移量,从而根据该偏移量调整镜片位置实现对焦。相位对焦的原理是在感光元件(如图像传感器)上设置相差检测像素点,相差检测像素点为遮住左边一半或者右边一半的像素点,可以对场景中的物体进行光量等信息的检测。相差为左右两边的像素点接收到的光信号之间的相位差,电子设备通过相差检测像素点中左右两侧分别获得的图像计算相关值,得到一个对焦函数,使得相差和偏移量为一一对应的关系。a. Phase focusing reserves some shielded pixels on the photosensitive element specifically for phase detection, and determines the offset of the lens relative to the focal plane through the distance between pixels and its changes, so that the lens position can be adjusted according to this offset to achieve focus. The principle of phase focusing is to set phase-difference detection pixels on the photosensitive element (such as an image sensor). A phase-difference detection pixel is a pixel whose left half or right half is shielded, and it can detect information such as the amount of light from objects in the scene. The phase difference is the difference in phase between the light signals received by the pixels on the left and right sides. The electronic device calculates a correlation value from the images obtained by the left and right sides of the phase-difference detection pixels and obtains a focusing function, so that there is a one-to-one correspondence between the phase difference and the offset.
b、反差对焦,是假设对焦成功后,相邻像素点的对比度最大,基于这种假设,在聚焦过程中确定一个对焦点,将该对焦点与相邻像素点的对比度进行判断,反复移动音圈马达后得到一个局部梯度最大值,完成对焦。b. Contrast focusing assumes that after focusing succeeds, the contrast between adjacent pixels is the largest. Based on this assumption, a focus point is determined during the focusing process, the contrast between the focus point and adjacent pixels is evaluated, and the voice coil motor is moved repeatedly until a local gradient maximum is obtained, completing the focus.
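The search described above can be sketched as a simple hill climb over lens positions. This is a toy illustration, not the patent's implementation: the `sharpness` callable and the single-direction stepping policy are assumptions, and a real ISP derives the contrast score from local image gradients.

```python
def contrast_af(lens_positions, sharpness):
    """Toy contrast-AF hill climb: step through candidate lens positions
    and stop at a local maximum of the sharpness (contrast) score.

    lens_positions: ordered sequence of candidate positions;
    sharpness: callable mapping a lens position to a contrast score.
    """
    best_pos = lens_positions[0]
    best_score = sharpness(best_pos)
    for pos in lens_positions[1:]:
        score = sharpness(pos)
        if score < best_score:       # passed the peak: local maximum found
            break
        best_pos, best_score = pos, score
    return best_pos
```

For instance, with a sharpness curve that peaks at position 3, the loop walks uphill, detects the first drop at position 4, and returns position 3.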
c、激光对焦,电子设备通过红外激光传感器,向被拍摄的主体(对焦主体)发射红外激光,当激光到达对焦主体时就会原路返回,电子设备根据红外激光往返的时间计算电子设备距对焦主体的距离,进一步地,电子设备基于该距离驱动音圈马达调整镜片的位置。c. Laser focusing. The electronic device emits an infrared laser toward the subject to be photographed (the focus subject) through an infrared laser sensor. When the laser reaches the focus subject, it returns along the original path. The electronic device calculates the distance from the electronic device to the focus subject according to the round-trip time of the infrared laser, and then drives the voice coil motor based on this distance to adjust the position of the lens.
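The distance computation above is plain time-of-flight arithmetic: the laser covers the device-to-subject distance twice, so distance = c·t/2. A minimal sketch:

```python
def laser_focus_distance(round_trip_seconds):
    """Subject distance from infrared-laser time of flight.

    The light travels to the subject and back, so the one-way
    distance is (speed of light) * (round-trip time) / 2.
    """
    C = 299_792_458.0  # speed of light in vacuum, m/s
    return C * round_trip_seconds / 2.0
```

For example, a round trip that takes the time light needs to cover 2 m corresponds to a subject 1 m away.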
(3)音圈马达,主要由线圈,磁铁组和弹片构成,线圈通过上下两个弹片固定在磁铁组内,当给线圈通电时,线圈会产生磁场,线圈磁场和磁石组相互作用,线圈会向上移动,而锁在线圈里的镜片便一起移动,当断电时,线圈在弹片弹力下返回,这样就实现了自动对焦功能。(3) The voice coil motor is mainly composed of a coil, a magnet group and spring plates. The coil is fixed in the magnet group by upper and lower spring plates. When the coil is energized, it generates a magnetic field; this field interacts with the magnet group, the coil moves upward, and the lens locked in the coil moves with it. When the power is cut off, the coil returns under the elastic force of the spring plates, thus realizing the autofocus function.
由于本申请实施例涉及应用眼球结构的相关参数,下面介绍一种眼球结构模型。Since the embodiment of the present application involves the application of relevant parameters of the eyeball structure, an eyeball structure model is introduced below.
(4)眼球结构模型(4) Eyeball structure model
图1A为本申请实施例提供的一种眼球结构模型的剖面示意图,如图1A所示,眼球包括角膜1、虹膜2、瞳孔3、晶状体4、视网膜5、角膜中心6、眼球中心7,其中:Figure 1A is a schematic cross-sectional view of an eyeball structure model provided in the embodiment of the present application. As shown in Figure 1A, the eyeball includes a cornea 1, an iris 2, a pupil 3, a lens 4, a retina 5, a cornea center 6, and an eyeball center 7, wherein :
角膜1是眼球前部的透明部分,是光线进入眼球的第一道关口,角膜1外表面中央3mm左右为球形弧面,称为光学区,周边曲率半径逐渐增大,呈非球面形。在本申请实施例所提供的一种眼球结构模型中,将角膜1假设为球形弧面。The cornea 1 is the transparent part at the front of the eyeball and is the first gate through which light enters the eyeball. The central region of about 3 mm on the outer surface of the cornea 1 is a spherical arc surface, called the optical zone; the radius of curvature of the periphery gradually increases, giving it an aspherical shape. In the eyeball structure model provided in the embodiment of the present application, the cornea 1 is assumed to be a spherical arc surface.
虹膜2是一圆盘状膜,中央有一孔称瞳孔3。如果光线过强,虹膜2内的括约肌收缩,则瞳孔3缩小;光线变弱,虹膜2的开大肌收缩,瞳孔3变大。The iris 2 is a disc-shaped membrane with a hole called the pupil 3 in the center. If the light is too strong, the sphincter muscle in the iris 2 contracts, and the pupil 3 shrinks; when the light becomes weak, the dilator muscle of the iris 2 contracts, and the pupil 3 becomes larger.
瞳孔3是动物或人眼睛内虹膜中心的小圆孔,为光线进入眼睛的通道。The pupil 3 is the small circular hole in the center of the iris in the animal or human eye, which is the passage for light to enter the eye.
视网膜5是眼球的感光部位,外界物体成像在视网膜5上。The retina 5 is the photosensitive part of the eyeball, and external objects are imaged on the retina 5 .
下面介绍本申请实施例涉及的应用场景。The application scenarios involved in the embodiments of the present application are introduced below.
请参阅图1B,图1B示出了本申请实施例提供的一种对焦方法涉及的应用场景的示意图。示例性的,用户打开电子设备100的相机应用程序,使用摄像头拍摄场景20,电子设备100显示如图1B所示的界面100。如图1B所示,界面100中可以包括预览画面101、菜单栏102、相册103、拍摄控件104、转换摄像头控件105。其中,菜单栏102中可以包括光圈、夜景、人像、拍照、录像、专业、更多等选项,用户可以根据自己的需求选择拍照模式。当用户选择的是“拍照”模式时,并且,用户希望预览画面101中的汽车106的所在的区域(感兴趣区域)的图像更加清晰,可以选择汽车106为对焦点,当电子设备接收到用户的选择对焦点的操作后,响应于该操作,通过对焦系统基于预览画面101中选择的对焦点进行对焦,使得感兴趣区域中的图像更加清晰。Please refer to FIG. 1B, which shows a schematic diagram of an application scenario involved in a focusing method provided by an embodiment of the present application. Exemplarily, the user opens the camera application of the electronic device 100 and uses the camera to shoot a scene 20, and the electronic device 100 displays an interface 100 as shown in FIG. 1B. As shown in FIG. 1B, the interface 100 may include a preview picture 101, a menu bar 102, an album 103, a shooting control 104, and a camera switching control 105. The menu bar 102 may include options such as aperture, night scene, portrait, photo, video, professional, and more, and the user may select a shooting mode as needed. When the user selects the "photo" mode and wishes the image of the area where the car 106 is located in the preview picture 101 (the region of interest) to be clearer, the car 106 may be selected as the focus point. After the electronic device receives the user's operation of selecting the focus point, in response to the operation, the focusing system performs focusing based on the focus point selected in the preview picture 101, so that the image in the region of interest is clearer.
确定对焦点的是电子设备对焦过程中重要的一个步骤,准确的对焦点的位置信息可以提高拍摄图像的质量,满足用户的需求。在现有技术中可以通过以下的方式进行对焦点的确定和基于对焦点对焦:Determining the focus point is an important step in the focusing process of the electronic device. Accurate position information of the focus point can improve the quality of captured images and meet the needs of users. In the prior art, the determination of the focus point and focusing based on the focus point can be performed in the following ways:
在现有技术的一些实现方式中,选择对焦点的操作可以是触摸操作。示例性的,如图2A所示,用户对预览画面101中的一个景物(如图中的汽车106)感兴趣,这时可以选择汽车106为对焦点,用户可以针对预览画面101中的汽车106所在的位置输入触摸操作。电子设备100接收到该触摸操作,响应于该操作,获取汽车106在预览画面中的位置信息。进一步地,电子设备100基于汽车106在预览画面中的位置信息进行对焦,获得汽车106所在区域的清晰的预览画面。上述一些实现方式,当用户单手握持电子设备100时,单手难以输入触摸操作,即无法在预览画面中选择对焦点,造成电子设备100无法准确对焦,从而预览画面101中的感兴趣区域中的图像不会更加清晰,这样,获取不到满足用户需求的图像,用户体验较差。在一些实施例中,电子设备还可以显示如图2B所示的界面200,如图2B所示,界面200中包括预览画面202,预览画面202中可以显示感兴趣区域框201,其中,感兴趣区域框201可以是根据对焦点汽车106的位置确定的,可以用于指示用户的感兴趣的主体所在的区域,可以用如图2B所示的虚线框201,或其他形状的图形如矩形框、圆形框、三角形等形状来指示感兴趣区域。In some implementations of the prior art, the operation of selecting the focus point may be a touch operation. Exemplarily, as shown in FIG. 2A, the user is interested in a scene in the preview picture 101 (such as the car 106 in the figure). In this case, the car 106 can be selected as the focus point, and the user can input a touch operation at the position of the car 106 in the preview picture 101. The electronic device 100 receives the touch operation and, in response to the operation, obtains the position information of the car 106 in the preview picture. Further, the electronic device 100 focuses based on the position information of the car 106 in the preview picture and obtains a clear preview picture of the area where the car 106 is located. In some of the above implementations, when the user holds the electronic device 100 with one hand, it is difficult to input a touch operation with that hand, that is, the focus point cannot be selected in the preview picture, so the electronic device 100 cannot focus accurately, the image in the region of interest in the preview picture 101 does not become clearer, an image meeting the user's needs cannot be obtained, and the user experience is poor. In some embodiments, the electronic device may also display an interface 200 as shown in FIG. 2B. As shown in FIG. 2B, the interface 200 includes a preview picture 202, and a region-of-interest frame 201 may be displayed in the preview picture 202. The region-of-interest frame 201 may be determined according to the position of the focus point, that is, the car 106, and may be used to indicate the area where the subject of interest of the user is located; it may be the dotted frame 201 shown in FIG. 2B, or a figure of another shape such as a rectangular frame, a circular frame or a triangle.
在现有技术的另一些实现方式中,电子设备100可以根据获取到的预览画面中的特定的主体的特征,来识别出特定的主体,以该特定的主体在预览画面中的位置为对焦点的位置,进而基于该对焦点所在的位置进行对焦。例如人脸对焦,电子设备100可以识别出预览画面中的人脸,以人脸在预览画面中的位置为对焦点的位置,进而基于该人脸在预览画面中的位置进行对焦,获得预览画面中人脸所在的区域清晰的图像。然而,在大多数的不同的拍摄场景中,人们感兴趣的主体大不相同,电子设备100能识别的主体种类有限,而且,在一些场景中,电子设备100对预览画面识别出的主体也有可能不是用户感兴趣的主体,因此,这样的方式不能确定出对焦点,从而也不能满足用户对拍摄的图像的要求,即用户感兴趣的主体所在的区域的图像清晰。In other implementations of the prior art, the electronic device 100 may identify a specific subject according to features of the specific subject acquired from the preview picture, take the position of the specific subject in the preview picture as the position of the focus point, and then focus based on the position of the focus point. For example, in face focusing, the electronic device 100 can recognize a face in the preview picture, take the position of the face in the preview picture as the position of the focus point, and then focus based on that position to obtain an image in which the area where the face is located in the preview picture is clear. However, in most different shooting scenarios, the subjects people are interested in vary greatly, and the types of subjects that the electronic device 100 can identify are limited. Moreover, in some scenes, the subject recognized by the electronic device 100 in the preview picture may not be the subject that the user is interested in. Therefore, such a method cannot determine the focus point, and thus cannot meet the user's requirement for the captured image, that is, a clear image of the area where the subject of interest to the user is located.
针对以上现有技术所未能解决的问题,本申请实施例提供了一种对焦方法,该方法包括:电子设备(例如,手机、平板电脑等)通过显示屏显示预览画面,该预览画面可以是电子设备通过后置摄像头或前置摄像头采集到的图像;电子设备可以检测到用户的第一操作;响应于第一操作,电子设备通过前置摄像头获取用户的目标图像;电子设备基于用户的目标图像,确定出用户在电子设备显示的预览画面上的注意力点所在的位置,该注意力点所在的区域即为用户的感兴趣区域;电子设备基于该注意力点所在的位置,对采集到预览画面的摄像头进行对焦。这样,在用户单手握持电子设备进行拍照时,电子设备也可以基于获取到的用户在预览画面中的注意力点的位置进行对焦,操作简便。而且,电子设备通过前置摄像头实时获取用户的目标图像并确定注意力点的位置,提供了一个稳定的对焦点的输入源,电子设备能够连续对焦,从而持续获得用户的感兴趣区域清晰的图像。其中,第二操作用于触发注意力点对焦功能,目标图像为电子设备通过前置摄像头采集到的图像,可以为包括用户的面部、眼部的图像。Aiming at the above unresolved problems in the prior art, an embodiment of the present application provides a focusing method. The method includes: an electronic device (for example, a mobile phone, a tablet computer, etc.) displays a preview picture through a display screen, where the preview picture may be an image collected by the electronic device through a rear camera or a front camera; the electronic device may detect a first operation of the user; in response to the first operation, the electronic device obtains a target image of the user through the front camera; based on the target image of the user, the electronic device determines the position of the user's attention point on the preview picture displayed by the electronic device, where the area where the attention point is located is the user's region of interest; and the electronic device focuses the camera that collects the preview picture based on the position of the attention point. In this way, when the user holds the electronic device with one hand to take a photo, the electronic device can also focus based on the obtained position of the user's attention point in the preview picture, which is easy to operate. Moreover, the electronic device obtains the user's target image in real time through the front camera and determines the position of the attention point, which provides a stable input source of focus points, so the electronic device can focus continuously and thus continuously obtain clear images of the user's region of interest. The second operation is used to trigger the attention point focusing function, and the target image is an image collected by the electronic device through the front camera, which may be an image including the user's face and eyes.
下面结合附图介绍本申请实施例提供的一种对焦方法所涉及的用户界面。A user interface involved in a focusing method provided by an embodiment of the present application will be described below with reference to the accompanying drawings.
本申请实施例提供的一种对焦方法可以应用在图1B所示的拍摄场景中,该场景可以是使用电子设备拍摄照片或拍摄视频,以用户使用电子设备拍摄一张照片为例,来说明本申请实施例提供的一种对焦方法所涉及的用户界面,该方法可以由图1B中的电子设备100执行。其中,该电子设备包括图3中所示的前置摄像头30,后置摄像头(图3中未示出)。用户打开自己的电子设备,使得电子设备的显示屏显示电子设备的桌面,图3为本申请实施例提供的一种用户的电子设备的界面示意图,如图3所示,该示意图包括状态栏31和菜单栏32。状态栏31包括运营商、当前时间、当前地理位置及当地天气、网络状态、信号状态和电源电量。如图3所示,运营商为中国移动;当前时间为2月9日星期五08:08;当前地理位置为北京,北京天气为多云且温度为6摄氏度;网络状态为wifi网络;信号状态为满格信号,表示当前信号较强;电源电量中的黑色部分可以表示电子设备的剩余电量。A focusing method provided by an embodiment of the present application can be applied in the shooting scene shown in FIG. 1B. The scene may be taking a photo or shooting a video with an electronic device. Taking a user taking a photo with an electronic device as an example, the user interface involved in the focusing method provided by the embodiment of the present application is described below. The method may be executed by the electronic device 100 in FIG. 1B. The electronic device includes a front camera 30 shown in FIG. 3 and a rear camera (not shown in FIG. 3). The user turns on the electronic device so that its display screen displays the desktop of the electronic device. FIG. 3 is a schematic diagram of an interface of a user's electronic device provided in an embodiment of the present application. As shown in FIG. 3, the diagram includes a status bar 31 and a menu bar 32. The status bar 31 includes the operator, the current time, the current geographic location and local weather, the network status, the signal status, and the battery level. As shown in FIG. 3, the operator is China Mobile; the current time is 08:08, Friday, February 9; the current geographic location is Beijing, where the weather is cloudy and the temperature is 6 degrees Celsius; the network status is a Wi-Fi network; the signal status is a full-bar signal, indicating that the current signal is strong; and the black part of the battery icon indicates the remaining battery of the electronic device.
The menu bar 32 includes icons of at least one application program, each application program has a corresponding application program name below the icon, for example: camera 110, mailbox 115, cloud sharing 116, memo 117, settings 118, gallery 119, phone 120, Short message 121 and browser 122. Wherein, the positions of the icon of the application program and the name of the corresponding application program can be adjusted according to the preference of the user, which is not limited in this embodiment of the present application.
需要说明的是,图3所示的电子设备的界面示意图为本申请实施例的示例性的展示,电子设备的界面示意图也可以为其他样式,本申请实施例对此不作限定。It should be noted that the schematic diagram of the interface of the electronic device shown in FIG. 3 is an exemplary display of the embodiment of the present application, and the schematic diagram of the interface of the electronic device may also be in other styles, which is not limited in the embodiment of the present application.
下面通过图4A-图4C来介绍电子设备触发基于注意力点进行对焦的操作。The operation of the electronic device triggering focusing based on the point of attention is introduced below through FIGS. 4A-4C .
在一些实施例中,如图3所示,用户可以针对菜单栏32中的相机110输入操作,电子设备接收到该操作,进一步响应于该操作,显示如图4A所示的界面,该界面是本申请实施例提供的一种电子设备的相机的界面示意图。如图4A所示,该界面包括相机菜单栏41、预览画面42、注意力点对焦控件401、相册40A、拍摄控件40B、转换摄像头控件40C、智慧视觉开关40D、人工智能(AI)拍摄开关40E、闪光灯开关40F、滤镜开关40G、和设置控件40H。其中:In some embodiments, as shown in FIG. 3, the user may input an operation on the camera 110 in the menu bar 32. The electronic device receives the operation and, in response to the operation, displays the interface shown in FIG. 4A, which is a schematic diagram of a camera interface of an electronic device provided in an embodiment of the present application. As shown in FIG. 4A, the interface includes a camera menu bar 41, a preview picture 42, an attention point focus control 401, an album 40A, a shooting control 40B, a camera switching control 40C, a smart vision switch 40D, an artificial intelligence (AI) shooting switch 40E, a flash switch 40F, a filter switch 40G, and a setting control 40H. Wherein:
相机菜单栏41,可以包括光圈、夜景、人像、拍照、录像、专业、更多等多种相机模式的选项,不同的相机模式可以实现不同的拍摄功能,相机菜单栏41中的“三角形”指向的相机模式用于指示初始的或用户选择的相机模式,如图4A中的402所示,“三角形”指向“拍照”,说明当前相机处于拍照模式。The camera menu bar 41 may include options for multiple camera modes such as aperture, night scene, portrait, photo, video, professional, and more. Different camera modes can realize different shooting functions. The camera mode that the "triangle" in the camera menu bar 41 points to indicates the initial camera mode or the camera mode selected by the user. As shown at 402 in FIG. 4A, the "triangle" points to "photo", indicating that the camera is currently in the photo mode.
预览画面42,为电子设备通过前置摄像头或后置摄像头实时采集到的图像。The preview image 42 is an image collected by the electronic device in real time through the front camera or the rear camera.
注意力点对焦控件401,可以用于触发基于注意力点进行对焦,注意力点为用户眼睛视线落在电子设备的显示屏上的位置,即用户眼睛视线落在预览画面42上的位置。The attention point focusing control 401 can be used to trigger focusing based on the attention point. The attention point is the position where the user's eye sight falls on the display screen of the electronic device, that is, the position where the user's eye sight falls on the preview image 42 .
相册40A,用于供用户查看已拍摄的图片和视频。The photo album 40A is used for the user to view the pictures and videos that have been taken.
拍摄控件40B,用于响应于用户的操作,使得电子设备拍摄图片或者视频。The shooting control 40B is configured to make the electronic device take pictures or videos in response to user operations.
转换摄像头控件40C,用于将采集图像的摄像头在前置摄像头和后置摄像头之间切换。The switching camera control 40C is used to switch the camera for capturing images between the front camera and the rear camera.
智慧视觉开关40D,用于开启或关闭智慧视觉,智慧视觉可以用于识物、购物、翻译、扫码。The smart vision switch 40D is used to turn on or off the smart vision. The smart vision can be used for object recognition, shopping, translation, and code scanning.
人工智能(AI)拍摄开关40E,用于开启或关闭AI拍摄。An artificial intelligence (AI) shooting switch 40E is used to turn on or off the AI shooting.
闪光灯开关40F,用于开启或关闭闪光灯。The flash switch 40F is used to turn on or turn off the flash.
滤镜开关40G,用于开启或关闭滤镜。The filter switch 40G is used to turn on or off the filter.
设置控件40H,用于设置采集图像时的各类参数。The setting control 40H is used to set various parameters when collecting images.
这时,触发基于注意力点进行对焦的操作可以是针对如图4A所示的注意力点对焦控件401输入的触摸(点击)操作。At this time, the operation to trigger focusing based on the attention point may be a touch (click) operation input on the attention point focus control 401 as shown in FIG. 4A .
在一些实施例中,如图3所示,用户可以针对菜单栏32中的相机110输入操作,电子设备接收到该操作,进一步响应于该操作,显示如图4B所示的界面,该界面是本申请实施例提供的另一种电子设备的相机的界面示意图。如图4B所示,该界面包括相机菜单栏43,预览画面44,相册40A、拍摄控件40B、转换摄像头控件40C、智慧视觉开关40D、人工智能(AI)拍摄开关40E、闪光灯开关40F、滤镜开关40G、和设置控件40H。关于相机菜单栏43、预览画面44和相关控件的说明可以参见上述图4A中的相关描述,此处不再赘述。如图4B中的403所示,“三角形”指向“拍照”,说明当前相机处于拍照模式。此时,用户希望选择相机菜单栏43中的“更多”,该选择相机菜单栏43中的“更多”的操作可以是针对相机菜单栏43输入滑动操作,该滑动操作具体可以为向图4B中相机菜单栏43处的箭头所指的方向进行滑动。可选地,上述选择相机菜单栏43中的“更多”的操作也可以是触摸操作。电子设备响应于选择相机菜单栏43中的“更多”的操作,显示如图4C所示的界面,该界面中包括相机菜单栏45和更多菜单栏46。其中:In some embodiments, as shown in FIG. 3, the user may input an operation on the camera 110 in the menu bar 32. The electronic device receives the operation and, in response to the operation, displays the interface shown in FIG. 4B, which is a schematic diagram of a camera interface of another electronic device provided in an embodiment of the present application. As shown in FIG. 4B, the interface includes a camera menu bar 43, a preview picture 44, an album 40A, a shooting control 40B, a camera switching control 40C, a smart vision switch 40D, an artificial intelligence (AI) shooting switch 40E, a flash switch 40F, a filter switch 40G, and a setting control 40H. For descriptions of the camera menu bar 43, the preview picture 44 and the related controls, reference may be made to the related descriptions in FIG. 4A above, which are not repeated here. As shown at 403 in FIG. 4B, the "triangle" points to "photo", indicating that the camera is currently in the photo mode. At this time, the user wishes to select "More" in the camera menu bar 43. The operation of selecting "More" in the camera menu bar 43 may be a sliding operation input on the camera menu bar 43; specifically, the sliding operation may be sliding in the direction indicated by the arrow at the camera menu bar 43 in FIG. 4B. Optionally, the above operation of selecting "More" in the camera menu bar 43 may also be a touch operation. In response to the operation of selecting "More" in the camera menu bar 43, the electronic device displays the interface shown in FIG. 4C, which includes a camera menu bar 45 and a more menu bar 46. Wherein:
相机菜单栏45中的“三角形”指向“更多”,更多菜单栏46中可以包括短视频控件、专业录像控件、美肤控件、注意力点对焦控件405,3D动态全景控件、全景控件、HDR控件、超级夜景控件、延时摄影等。The "triangle" in the camera menu bar 45 points to "more", and the more menu bar 46 can include short video control, professional video control, skin beautification control, attention focus control 405, 3D dynamic panorama control, panorama control, HDR Controls, super night scene controls, time-lapse photography, etc.
注意力点对焦控件405的说明可以参见上述图4A中注意力点对焦控件401的相关描述,此处不再赘述。这时,触发基于注意力点进行对焦的操作可以是针对如图4C所示的注意力点对焦控件405输入的触摸(点击)操作。For the description of the attention point focus control 405, refer to the related description of the attention point focus control 401 in FIG. 4A above, which will not be repeated here. At this time, the operation to trigger focusing based on the attention point may be a touch (click) operation input on the attention point focus control 405 as shown in FIG. 4C .
In some other embodiments, as shown in FIG. 3, the operation that triggers focusing based on the attention point may be a touch operation on the camera 110 in the menu bar 32; that is, as soon as the camera is opened, the electronic device starts the process of focusing based on the attention point.
In still other embodiments, the operation that triggers focusing based on the attention point is not limited to the manners provided in the foregoing embodiments, and may also be an operation such as voice control. For example, with voice control, after the user opens the camera, the electronic device collects speech through the microphone and can recognize whether the speech includes voice-control information such as "turn on attention-point focusing" or "focus"; this voice-control information is used to trigger focusing based on the attention point.
It should be noted that the operations for triggering focusing based on the attention point described in the foregoing embodiments are merely examples; other trigger operations may also be used, which is not limited in the embodiments of this application.
It should be noted that the interface diagrams of the electronic device shown in FIG. 4A to FIG. 4C are exemplary illustrations of the embodiments of this application; the interfaces of the electronic device may also take other forms, which is not limited in the embodiments of this application.
After detecting an operation that triggers focusing based on the attention point, the electronic device, in response to the operation, captures a target image of the user through the front camera to determine the position at which the user's line of sight falls on the preview image displayed by the electronic device, that is, the position information of the user's attention point. The position information of the attention point may include the coordinates of the attention point in the plane coordinate system in which the preview image is located (that is, the display coordinate system). For the process by which the electronic device determines the position information of the user's attention point, refer to the related description of S901-S907 in FIG. 9 below.
In an optional implementation, when the electronic device detects that the user has input an operation on the attention-point focusing control 401 in the interface shown in FIG. 4A, it may, in response to the operation, display the interface shown in FIG. 4D, in which the attention-point focusing control 401 changes from a first color (for example, gray) to a second color (for example, black), indicating that attention-point-based focusing has been enabled. The display form of the attention-point focusing control indicating that attention-point-based focusing is enabled or disabled is not limited to a color change; it may also be a change in transparency or another display form. In some implementations, in response to the foregoing operation on the attention-point focusing control, the electronic device displays the interface shown in FIG. 4F, which includes prompt information, for example "Please look at the dot below", to prompt the user to look at the calibration point 408 in the interface. The calibration point 408 is used by the electronic device to determine the user's face model, eyeball structure parameters, and the like; for the specific process, refer to the related description of S903 in FIG. 9 below.
In another optional implementation, when the electronic device detects that the user has input an operation on the attention-point focusing control 405 in the interface shown in FIG. 4C, it may, in response to the operation, display the interface shown in FIG. 4E, which includes an icon 406 indicating that attention-point-based focusing has been enabled and a control 407 for disabling attention-point-based focusing. After using attention-point-based focusing, the user may input an operation on the control 407 to turn the function off. In some implementations, in response to the foregoing operation on the attention-point focusing control, the electronic device displays the interface shown in FIG. 4G, which includes prompt information, for example "Please look at the dot below", to prompt the user to look at the calibration point 409 in the interface. The calibration point 409 is used by the electronic device to determine the user's face model, eyeball structure parameters, and the like; for the specific process, refer to the related description of S903 in FIG. 9 below.
It should be noted that the preview image displayed by the electronic device may be an image captured by a rear camera or an image captured by the front camera. For example, when the user opens the camera and uses the front camera of the electronic device to take a selfie, the electronic device displays the preview image captured by the front camera. In response to an operation that triggers attention-point-based focusing, the electronic device determines, from the acquired target image of the user, the position information of the attention point at which the user's line of sight falls on the display, and then focuses according to that position information. The target image of the user may be obtained from the preview image, or directly from the image captured by the front camera (that is, the image captured by the front camera is split into two streams: one is displayed on the screen as the preview image, and the other is used to determine the position information of the user's attention point).
The electronic device focuses based on the determined position information of the user's attention point. When the electronic device has determined the position information of the user's attention point on the display, the focusing process for the attention point can be triggered in any of the following implementations. It should be noted that these implementations are described by taking, as an example, a preview image captured by the rear camera of the electronic device.
Implementation 1: When the duration of the user's gaze on an attention point satisfies a preset condition, the electronic device triggers the focusing process for that attention point.
The gaze duration is the period from when the user's line of sight falls on an attention point on the display to when it leaves that attention point. When detecting that the gaze duration satisfies the preset condition, the electronic device focuses the camera that captures the preview image. Based on the position information of the attention point, the electronic device places a focus frame and then performs the focusing process according to the phase-difference data in the focus frame, where the phase-difference data is the phase difference within the focus frame. In the focusing process, the electronic device determines, from the phase-difference data in the focus frame, the distance and direction by which the lens needs to be adjusted, and drives a voice coil motor to adjust the position of the lens in the camera, that is, to change the distance between the lens and the image sensor (the image distance), so that the image of the region containing the user's attention point is sharp. From the target images of the user acquired in real time through the front camera, the electronic device determines, at each moment, the position information of the attention point at which the user's line of sight falls on the preview image, and records the user's gaze duration for each attention point.
In a possible implementation, the preset condition is that the gaze duration for the attention point is not less than a first duration threshold; when the preset condition is satisfied, the electronic device performs the focusing process (refer to the related description of S906-S907 in FIG. 9 below). Specifically, when detecting that the user's line of sight has rested on a first attention point for no less than the first duration threshold, the electronic device focuses based on the position information of the first attention point. The electronic device starts timing when it detects that the user's line of sight falls on the first attention point, and continues until it determines that the user's attention point has changed, that is, the determined position information of the attention point is no longer that of the first attention point; the electronic device then restarts timing. Optionally, the electronic device may display a timer indicating how long the user's line of sight has rested on the attention point.
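The dwell-time condition described above can be sketched as follows. This is a minimal, hypothetical Python sketch, not an implementation from the patent: the class name, the 3 s default threshold, and the pixel radius used to decide whether two gaze samples belong to the same attention point are all illustrative assumptions.

```python
FIRST_DURATION_THRESHOLD = 3.0  # seconds (illustrative default)

class DwellTrigger:
    """Triggers focusing once the gaze rests on one attention point long enough."""

    def __init__(self, threshold=FIRST_DURATION_THRESHOLD, radius=50):
        self.threshold = threshold  # required gaze duration in seconds
        self.radius = radius        # pixels within which samples count as the same point
        self.point = None           # current attention point (x, y)
        self.start = None           # timestamp when the gaze first landed on it

    def update(self, gaze_xy, timestamp):
        """Feed one gaze sample; return the attention point when focus should trigger."""
        if self.point is None or self._moved(gaze_xy):
            # attention point changed: restart timing, as described above
            self.point, self.start = gaze_xy, timestamp
            return None
        if timestamp - self.start >= self.threshold:
            return self.point  # dwell long enough: trigger focusing here
        return None

    def _moved(self, gaze_xy):
        dx = gaze_xy[0] - self.point[0]
        dy = gaze_xy[1] - self.point[1]
        return dx * dx + dy * dy > self.radius ** 2
```

Feeding gaze samples near the same position for 3 s yields a trigger; a sample far from the locked point restarts the timer, mirroring the timer-reset behavior described above.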
For example, suppose the preset condition for focusing is that the user's gaze on the first attention point lasts no less than 3 s. The electronic device determines the position information of the user's first attention point; as shown in FIG. 5A, the first attention point is the position 501 on the preview image. The electronic device then displays a timer 502 showing how long the user's line of sight has rested on the position 501; as shown by the timer 502 in FIG. 5A, timing starts from 0.0 s. When the electronic device detects that the timer value has reached 3.0 s, as shown at 503 in FIG. 5B, that is, when it determines that the user's line of sight has rested on the first attention point for 3.0 s, it performs the focusing process based on the position information of the first attention point.
Optionally, the timer may also be a countdown timer. When the electronic device determines the position information of the user's first attention point, it displays a countdown timer 503 as shown in FIG. 5B, which shows 3.0 s. When the electronic device detects that the countdown timer value has reached 0.0 s, as shown at 502 in FIG. 5A, it performs the focusing process based on the position information of the first attention point.
As shown in FIG. 5C, when the electronic device detects, from the determined position information, that the user's attention point has changed from the position 501 in FIG. 5A to the position 504 in FIG. 5C, the timer 505 displayed by the electronic device restarts timing and records how long the user's attention rests on the position 504.
In another possible implementation, the preset condition is that, after the user's gaze on the first attention point has lasted no less than a second duration threshold, the user's line of sight leaves the first attention point for less than a third duration threshold; in that case the electronic device focuses based on the position information of the first attention point. Specifically, at some moment after the user's gaze on the first attention point has lasted no less than the second duration threshold, the electronic device detects that the user's line of sight leaves the first attention point and, after a first duration, returns to it. The electronic device then determines whether the first duration is less than the third duration threshold. When the first duration is less than the third duration threshold, the electronic device does not change the focus information based on the position information of the first attention point, that is, it does not perform the process of adjusting the voice coil motor for focusing. When the first duration is not less than the third duration threshold and not less than the second duration threshold, the electronic device performs the focusing process based on the position information of a second attention point, where the second attention point is the position at which the user's line of sight fell on the preview image during the first duration. It should be understood that, when it is detected that the user's line of sight has left the first attention point, the attention point determined by the electronic device from the acquired target image of the user may be on the preview image (such as the second attention point above) or off it; when the electronic device detects that the user's attention point is not on the preview image, it does not perform attention-point-based focusing.
For example, suppose the second duration threshold is set to 3 s and the third duration threshold to 1 s. The electronic device detects that the user's attention point is at the position 501 in FIG. 5D. At some moment after the gaze at the position 501 has lasted no less than 3 s (the timer 502 in FIG. 5D shows 4.0 s), the electronic device detects that the user's attention point moves to the position 506 in FIG. 5D and, after a gaze of 0.5 s at the position 506, returns to the position 501. Since the 0.5 s for which the user's attention left the position 501 is less than the third duration threshold of 1 s, the electronic device continues focusing based on the position information of the position 501. It should be understood that, once the electronic device detects that the user's gaze at the position 501 has lasted no less than 3 s, it places the focus frame based on the position information of the position 501 and focuses; when it then detects that the user's line of sight has left the position 501 for less than the third duration threshold, it still focuses based on the position information of the position 501 without changing the focus information. The electronic device therefore does not need to repeat the process of adjusting the voice coil motor, which reduces its resource consumption.
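The look-away grace period above amounts to a small decision rule. The sketch below is a hypothetical illustration: the function name and the default threshold values (3 s and 1 s, matching the example) are assumptions, not API from the patent.

```python
def next_focus_point(locked_point, new_point, away_duration,
                     second_threshold=3.0, third_threshold=1.0):
    """Decide whether focus stays on the locked attention point or moves.

    away_duration: seconds the gaze spent away from the locked point.
    """
    if away_duration < third_threshold:
        # brief glance away: keep the existing focus, voice coil motor untouched
        return locked_point
    if away_duration >= second_threshold:
        # the gaze rested elsewhere long enough to count as a new attention point
        return new_point
    return locked_point
```

With the example values, the 0.5 s glance at position 506 keeps focus on position 501, while a sustained 4 s gaze elsewhere would move it.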
In still another possible implementation, the preset condition is that the user's line of sight leaves the first attention point and falls on a second attention point, and the user's physiological characteristics change when it falls on the second attention point. Specifically, when the user's gaze on the first attention point has lasted no less than the third duration threshold, the electronic device focuses based on the position information of the first attention point. At some later moment, the electronic device detects that the user's line of sight leaves the first attention point and falls on the second attention point. If, from the captured target image of the user, the electronic device determines that the physiological characteristics of the user's eyes have changed, for example, the pupils have dilated, it performs the focusing process based on the position information of the second attention point. If the electronic device detects no change in the physiological characteristics of the user's eyes, it does not change the focus information based on the position information of the first attention point, that is, it does not perform the process of adjusting the voice coil motor for focusing. It should be understood that, in some implementations, when the electronic device detects a change in the user's physiological characteristics, it may place the focus frame based on the position information of the user's current attention point and focus, without judging whether the gaze duration satisfies a condition.
For example, suppose the third duration threshold is set to 3 s. As shown in FIG. 5E, the electronic device detects that the user's attention point is at the position 501 and, at some moment after the gaze at the position 501 has lasted no less than 3 s (the timer 502 in FIG. 5E shows 4.0 s), detects that the user's attention point moves to the position 504 in FIG. 5E. If the electronic device also recognizes that the pupil size in the target image of the user is larger than when the attention point was at the position 501, it can determine that the pupils have dilated; the electronic device then places the focus frame based on the position information of the position 504 and focuses.
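The pupil-dilation check in this example can be sketched as a simple comparison of measured pupil diameters between two target images. This is a hedged illustration: the 15% growth ratio is an assumed value chosen for the sketch, not a threshold stated in the patent.

```python
def pupil_dilated(prev_diameter_px, curr_diameter_px, ratio=1.15):
    """Return True when the pupil diameter (in pixels) has grown beyond `ratio`.

    prev_diameter_px: pupil size measured while gazing at the first attention point.
    curr_diameter_px: pupil size measured at the new attention point.
    """
    return curr_diameter_px >= prev_diameter_px * ratio
```

In practice the diameters would come from eye-region analysis of the front-camera target image; here they are plain numbers for illustration.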
Physiological characteristics of the eyes, such as pupil size, reflect certain psychological activity: whenever strong interest or motivation arises, the pupils dilate rapidly. Therefore, when the electronic device recognizes that the user's pupils have dilated, the region containing the user's current attention point is a region of interest to the user, and the focus frame can be placed and focusing performed based on the position information of the current attention point, yielding a sharp image of the user's region of interest. In this way, the captured image better matches the user's intention without requiring manual or other operations, which improves the user experience.
It should be noted that the first duration threshold, the second duration threshold, and the third duration threshold may or may not be the same value; the embodiments of this application do not limit the values of these duration thresholds.
In still another possible implementation, the electronic device may perform smooth focusing as the user's attention point changes. Specifically, when the electronic device detects that the user's gaze on an attention point has lasted no less than a fourth duration threshold, it places the focus frame based on the position information of that attention point and then drives the voice coil motor to adjust the distance between the lens and the image sensor for focusing. For example, when the preview image contains multiple scenes or people, the user may be interested in each of them, and the user's attention will shift in turn from one scene or person to its neighbors. When the electronic device detects that the user's gaze on a first scene or person has lasted no less than the fourth duration threshold, it focuses based on the position information of that attention point (the position information of the first scene or person); when it then detects that the user's attention has shifted to a second scene or person adjacent to the first, and the gaze there has lasted no less than the fourth duration threshold, it focuses based on the position information of the shifted attention point (the position information of the second scene or person). When the user's attention shifts from one position to an adjacent one, the electronic device drives the voice coil motor to adjust the lens position, and the lens only needs to move a small distance.
For example, suppose the fourth duration threshold is 2 s. As shown in FIG. 6A, the user's attention is first on the person 601 in FIG. 6A, then shifts to the adjacent person 603, and finally from the person 603 to the adjacent person 605. The process by which the electronic device detects the change of the user's attention point is shown in FIG. 6B to FIG. 6D. The electronic device detects that the user's gaze on the person 601 lasts 3.0 s, as shown by the timer 602 in FIG. 6B; it then determines, based on the position information of the person 601, that the lens should be adjusted by a first distance, drives the voice coil motor accordingly, and moves the lens to a first position. When the electronic device detects that the user's attention shifts from the person 601 to the adjacent person 603, it displays the timer 604 shown in FIG. 6C; when the timer 604 shows a gaze duration of 2.0 s, no less than the fourth duration threshold of 2 s, the electronic device determines, based on the position information of the person 603, that the lens should be adjusted by a second distance, drives the voice coil motor accordingly, and moves the lens to a second position. When the electronic device detects that the user's attention shifts from the person 603 to the adjacent person 605, it displays the timer 606 shown in FIG. 6D; when the timer 606 shows a gaze duration of 3.0 s, no less than the fourth duration threshold of 2 s, the electronic device determines, based on the position information of the person 605, that the lens should be adjusted by a third distance, drives the voice coil motor accordingly, and moves the lens to a third position. Because the person 603 is adjacent to the person 601, the second distance between the second lens position and the first is small; likewise, because the person 605 is adjacent to the person 603, the third distance between the third lens position and the second is small. Therefore, after focusing on the position of the person 601, the electronic device focuses on the person 603 and then on the person 605 by adjusting the lens only by the small second and third distances. Adjusting only a small distance each time achieves smooth focusing, which improves imaging quality while saving resource consumption of the electronic device.
In some embodiments, when the user uses burst mode or records a video, the electronic device detects the position of the user's attention point. When it detects that, across consecutive frames, the attention point shifts in turn toward adjacent scenes or people, the electronic device performs the smooth-focusing process to obtain a multi-frame burst of photos or a video that meets the user's needs; each burst photo or each recorded video frame is focused based on its corresponding attention point. The user may long-press the shooting control 40B shown in FIG. 6A to enable burst mode. In an optional implementation, when the difference between the phase differences of the attention-point regions in two or more consecutive frames of the burst or video exceeds a first phase-difference threshold, step-by-step focusing may be used. Specifically, the distance between the attention points of two or more consecutive frames may be detected; when that distance exceeds a first threshold (for example, 60 μm), the electronic device may drive the voice coil motor in increments of a first step (for example, 30 μm) so that the lens reaches the corresponding position, achieving smooth focusing across multiple frames and improving the focusing effect.
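The step-by-step lens movement above can be sketched as planning a sequence of intermediate voice-coil-motor positions. A minimal Python sketch follows, assuming the 60 μm threshold and 30 μm step from the example; the function name and the direct-move behavior for small travels are illustrative assumptions.

```python
def plan_vcm_steps(current_um, target_um, threshold_um=60.0, step_um=30.0):
    """Return the list of intermediate lens positions (micrometers) to visit.

    Travels no larger than `threshold_um` are done in one move; larger
    travels are broken into `step_um` increments for smooth focusing.
    """
    travel = target_um - current_um
    if abs(travel) <= threshold_um:
        return [target_um]  # small move: go directly
    direction = 1.0 if travel > 0 else -1.0
    steps, pos = [], current_um
    while abs(target_um - pos) > step_um:
        pos += direction * step_um
        steps.append(pos)
    steps.append(target_um)  # final (possibly partial) step
    return steps
```

For a 100 μm travel this yields stops at 30, 60, 90, and 100 μm, so each motor adjustment stays small, matching the smooth-focusing behavior described above.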
Optionally, in some embodiments, after detecting the position of the user's attention point, the electronic device determines a region-of-interest frame from the position information of the attention point. In some implementations, the size of the region-of-interest frame may be preset; after determining the position information of the user's attention point, the electronic device displays the region-of-interest frame with the attention point as its geometric center. In other implementations, the electronic device may intelligently analyze the image around the attention point, for example recognizing a face through face recognition and determining a face frame, also called a region-of-interest frame, according to the size of the face. For example, as shown in FIG. 6E, FIG. 6F, and FIG. 6G, the electronic device determines three region-of-interest frames 607, 608, and 609 based on the position information of the user's attention points at three moments and on face recognition.
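The preset-size region-of-interest frame above can be sketched as centering a fixed rectangle on the attention point and clamping it to the preview bounds. The frame and screen dimensions below are illustrative assumptions, not values from the patent.

```python
def roi_frame(point, frame_w=200, frame_h=200, screen_w=1080, screen_h=2340):
    """Return (left, top, right, bottom) of the ROI frame, clamped to the screen.

    The attention point `point` = (x, y) is the frame's geometric center
    except near the screen edges, where the frame is shifted to stay inside.
    """
    x, y = point
    left = min(max(x - frame_w // 2, 0), screen_w - frame_w)
    top = min(max(y - frame_h // 2, 0), screen_h - frame_h)
    return (left, top, left + frame_w, top + frame_h)
```

An attention point near a screen corner still yields a frame fully inside the preview, so the phase-difference data for focusing always comes from valid pixels.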
Implementation 2: When the user's facial or body movements match a preset instruction, the electronic device performs the focusing process.
In a possible implementation, the electronic device may focus in combination with the user's physiological-characteristic information; this information may be the user's expression or movement and is used to trigger the focusing process of the electronic device. Specifically, the electronic device may capture an image of the user through the front camera and determine whether the facial expression, head movement, or body movement in the user's image matches a preset expression or action instruction. When physiological-characteristic information matching a preset expression or instruction is recognized in the user's image, the electronic device determines the position of the focus frame based on the position information of the attention point and then performs the focusing process. For example, when the electronic device detects a confirming action by the user, such as a nod or an "OK" gesture, or detects that the user's pupils have dilated, the electronic device focuses. The electronic device may store the preset expression or action instructions.
Implementation 3: The electronic device performs focusing in combination with the user's voice information.
In a possible implementation, the electronic device may perform focusing in combination with the user's voice information. Specifically, in response to the above operation that triggers attention-point-based focusing, the electronic device acquires the sound of the environment in real time through the microphone and recognizes the voice information it contains. When the electronic device has determined the position information of the user's attention point and detects that the user's voice information includes information about a scene or person in the preview image (that is, the first focus object), it identifies that scene or person (also referred to as the shooting subject) within a preset-range area around the attention point; further, the electronic device places the focus frame based on the position information of the shooting subject and performs focusing. The preset-range area around the attention point may be an area of preset pixel height and preset pixel width centered on the attention point.
Exemplarily, FIG. 7A shows an interface of an electronic device provided by an embodiment of this application. As shown in FIG. 7A, the microphone icon 701 is used to turn the microphone on or off and indicates its state: 701 in FIG. 7A indicates that the microphone is on, and 702 in FIG. 7B indicates that the microphone is off. The electronic device determines that the user's attention point is at position 703. While the microphone is on, the electronic device detects that the sound collected by the microphone includes the voice information "car". The electronic device then analyzes the image within the preset range around the user's attention point, recognizes the car 704, determines the car 704 in FIG. 7A as the focus target, and further places the focus frame based on the position information of the car 704 to perform focusing.
Optionally, after recognizing the car 704 around the user's attention point 703, the electronic device may display an ROI frame sized according to the car 704, such as frame 705 shown in FIG. 7C.
Implementation 4: The electronic device performs focusing based on the previously focused subject.
After determining the position information of the user's attention point, the electronic device identifies the subject (a scene or a person) at the location of the attention point (position A), and then focuses based on the position information of that subject. When the electronic device detects that the user's attention point has shifted to another location (position B), it identifies the scenes or people within a preset range of position B; when it recognizes one of the same category as the subject at position A, it records the subject at position B as the shooting subject and determines its position information. Further, the electronic device determines the focus frame based on the position information of the subject at position B, and then performs the focusing process. Here, "the same category" means scenes or people sharing the same characteristics; categories include, for example, human faces, buildings, roads, vehicles, plants, and animals. It should be understood that the position information of position A and the position information of the subject at position A may be identical or may deviate slightly.
Exemplarily, as shown in FIG. 8A and FIG. 8B, the electronic device determines that the user's attention point is at position 801 in FIG. 8A and recognizes that the subject at position 801 is the human face 802; the electronic device then focuses based on the position information of the face 802. When the electronic device detects that the user's attention point has shifted to position 803 in FIG. 8B, it checks whether a subject of the same category as the face 802 is present around position 803. The electronic device recognizes the human face 804 within the preset range of position 803 and further focuses based on the position information of the face 804.
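The same-category subject re-identification of Implementation 4 can be sketched as follows; the detection tuple format, the distance threshold, and the nearest-first tie-breaking are illustrative assumptions, not details taken from the embodiment.

```python
def pick_same_class_subject(prev_class, detections, point_b, max_dist):
    """Among detections near the new attention point (position B), pick the
    one whose class matches the previously focused subject's class.
    Each detection is (class_name, center_x, center_y)."""
    candidates = []
    for cls, x, y in detections:
        dist = ((x - point_b[0]) ** 2 + (y - point_b[1]) ** 2) ** 0.5
        if cls == prev_class and dist <= max_dist:
            candidates.append((dist, (x, y)))
    if not candidates:
        return None  # no same-category subject: fall back to the raw attention point
    return min(candidates)[1]  # nearest same-category subject
```

With a "face" previously in focus, only "face" detections within the preset range of position B are considered, mirroring the FIG. 8A/8B example.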
It should be noted that, in some embodiments, the electronic device executes the attention-point-based focusing process whenever it detects the situation described in any one of Implementation 1 to Implementation 4 above.
A focusing method provided by an embodiment of this application is described in detail below with reference to the accompanying drawings. The method may be applied in the focusing scenario of the electronic device shown in FIG. 1B, and may be executed by the electronic device 100 in FIG. 1B. As shown in FIG. 9, the method may include, but is not limited to, the following steps:
S901: The electronic device receives a first operation of the user for triggering the attention-point-based focusing function.
The first operation may be the operation, described above, that triggers focusing based on the attention point; for details, refer to the relevant description above, which is not repeated here.
S902: In response to the first operation, the electronic device acquires a target image through the front camera.
The target image is an image collected by the electronic device through the front camera, and may be an image that includes the user's face and eyes.
In some embodiments, after the electronic device displays the preview image, it receives the first operation and then performs S902.
In some other embodiments, the first operation is a touch operation on the camera 110 in FIG. 3 above. The electronic device receives the first operation and, in response, displays the preview image collected by the default camera and acquires the target image through the front camera.
S903-S905: The electronic device determines, based on the target image, the position information of the user's attention point on the display screen of the electronic device.
In a possible implementation, the electronic device processes the acquired target image by image-processing methods to obtain image parameters and the user's eye structure parameters; further, the electronic device uses the obtained image parameters and eye structure parameters to determine the position information of the user's attention point on the display screen of the electronic device.
Referring to FIG. 10, FIG. 10 is a schematic diagram of a 3D eye model provided by an embodiment of this application. As shown in FIG. 10, the geometric center of the display screen of the electronic device is taken as the origin of the world coordinate system, where P_i is the iris center, O_c is the cornea center, O_e is the eyeball center, V_o is the optical-axis unit vector, and V_g is the visual-axis unit vector. The visual axis of the eye is defined as the line from the cornea center O_c to the attention point P_g on the plane of the display screen; the optical axis is the line connecting the eyeball center and the cornea center.
In one implementation, as shown in FIG. 10, the attention point P_g can be expressed as:

P_g = O_e + c·V_o + λ·V_g    (1)
where c is the distance between the eyeball center O_e and the cornea center O_c, c = ‖O_e O_c‖; c is a fixed value, typically 5.3 mm. λ is the distance between the cornea center O_c and the attention point P_g, λ = ‖O_c P_g‖, and can be obtained from the following formula:

λ = −(n + (O_e + c·V_o)·V_s) / (V_g·V_s)    (2)
where V_s is the unit normal vector of the plane in which the display screen of the electronic device lies. For any point P on that plane, P·V_s = −n; therefore, for the attention point P_g, P_g·V_s = −n also holds. The value n can be obtained by calibrating the display screen and the front camera, and depends on their relative positions; for a given electronic device, n is a fixed value.
In this way, the eyeball center O_e, the optical-axis unit vector V_o, and the visual-axis unit vector V_g can be determined based on the user's target image; c and λ are then obtained and substituted into formula (1) to yield the position information of the attention point P_g.
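Formula (1), with λ chosen so that P_g satisfies the screen-plane constraint P_g·V_s = −n, can be sketched numerically as follows. The function name and the use of plain 3-tuples are illustrative; the expression for λ is derived here from the plane constraint rather than quoted from the embodiment.

```python
def gaze_point(o_e, v_o, v_g, v_s, n, c=5.3):
    """Sketch of formula (1): P_g = O_e + c*V_o + lam*V_g, where lam is
    chosen so that P_g lies on the screen plane (P_g . V_s = -n).
    All vectors are 3-tuples in the world coordinate system (mm)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    o_c = tuple(e + c * o for e, o in zip(o_e, v_o))   # cornea center O_c = O_e + c*V_o
    lam = -(n + dot(o_c, v_s)) / dot(v_g, v_s)         # solves P_g . V_s = -n
    return tuple(p + lam * g for p, g in zip(o_c, v_g))
```

For example, with the screen in the plane z = 0 (V_s = (0, 0, 1), n = 0) and the eyeball 300 mm in front of it, the computed attention point always lies on that plane.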
Steps S903-S905 below describe in detail how the electronic device determines, from the target image, the position information of the user's attention point on the display screen of the electronic device.
S903: The electronic device determines the image parameters and the user's eyeball structure parameters based on the target image.
The image parameters include the transformation (R, t) required to convert the user's eyeball center from the head coordinate system to the world coordinate system in which the electronic device lies, and the position information of the iris center P_i. The transformation (R, t) is used to determine the position of the user's eyeball center in the world coordinate system; different head poses of the user correspond to different transformations. The transformation may include a rotation parameter R and a translation parameter t; in the embodiments of this application, the rotation parameter R may be a rotation matrix and the translation parameter t may be a translation matrix.
The eyeball structure parameters include the coordinates of the eyeball center O_e in the head coordinate system, and the Kappa angle shown in FIG. 10. The Kappa angle is the angle between the visual axis and the optical axis, and includes a horizontal component α and a vertical component β.
In some specific implementations, the electronic device determines the eyeball structure parameters through a calibration method for the eyeball center and a calibration method for the Kappa angle. Each person has fixed eyeball structure parameters. The process by which the electronic device calibrates them may include: the electronic device establishes the user's head coordinate system; when receiving the operation that triggers attention-point focusing, the electronic device responds by displaying a calibration interface such as the one shown in FIG. 4G, instructing the user to gaze at a calibration point displayed on the display screen (such as 408 or 409) for a preset duration (for example, 1 s); the electronic device can then acquire the user's image and compute the user's eyeball structure parameters.
Optionally, the electronic device may also determine the eyeball structure parameters using multiple calibration points. For example, the electronic device displays, in sequence, multiple calibration points at different positions and instructs the user to gaze at each one for a preset duration; the electronic device then determines the user's eyeball structure parameters from the images acquired while the user gazes at each calibration point.
In some specific implementations, the determination of the image parameters (R, t, P_i) from the target image by the electronic device may include the following process:
S9031: The electronic device applies face recognition and eye recognition techniques from image processing to recognize, respectively, the face and the eyes in the target image.
S9032: The electronic device determines the transformation (R, t) between the head coordinate system and the world coordinate system based on the face in the target image.
In some embodiments, the electronic device includes a sensor that can obtain the position information of facial feature points, for example a Kinect sensor. The electronic device detects, through the sensor, the position information of the facial feature points at time t'; then, referring to the position information of the feature points in a face model and the transformation of those feature points from the head coordinate system to the world coordinate system, it determines the rotation matrix R (including yaw, pitch, and roll angles) and the translation matrix t for converting the facial feature points at time t' to the world coordinate system, that is, the transformation (R, t) between the head coordinate system and the world coordinate system. The face model is a reference model determined from images collected by the front camera while the user keeps the face directly facing the display screen for a period of time. In some embodiments, the face model may be obtained while the user faces a calibration point on the display screen (such as 408 or 409 in FIG. 4G); this process may be performed simultaneously with the determination of the user's eyeball structure parameters described above.
S9033: The electronic device determines the coordinates of the iris center P_i based on the eye region of the target image.
In some embodiments, the electronic device may determine the coordinates of the iris center using an image-gradient method. Specifically, the electronic device may determine the iris center P_i using the following formula:

h′ = argmax_h { (1/N) · Σ_{i=1..N} d_i · g_i }

where h′ is the iris center, that is, P_i; h is a candidate (potential) iris center; d_i is the displacement vector; g_i is the gradient vector; and N is the number of pixels in the image. In some implementations, the electronic device may scale the displacement vector d_i and the gradient vector g_i to unit vectors, so that all pixels carry equal weight:
d_i = (x_i − h) / ‖x_i − h‖

‖g_i‖ = 1
Specifically, the electronic device determines the gradient vector of each pixel x_i in the eye image; treating every pixel as a potential iris center h, it determines the displacement vector between x_i and each candidate center, that is, the displacement vector between x_i and every pixel in the eye image; it computes the dot products between the gradient vector of pixel x_i and these displacement vectors; for each candidate center, it computes the mean of the dot products over all pixels x_i; the candidate pixel x_max with the largest mean dot product is taken as the iris center, and the coordinates of x_max are taken as the coordinates of the iris center P_i.
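The gradient-based iris-center search above can be sketched on a small grayscale image as follows. The central-difference gradients, the brute-force scan over all candidate centers, and the use of the raw (unsquared) dot product are assumptions of this sketch; accumulating the raw dot product assumes a dark iris on a brighter background, so that gradients point outward from the true center.

```python
def iris_center(gray):
    """Gradient-based iris-center estimate on a small grayscale image
    (list of rows). Every pixel is a candidate center h; the pixel whose
    mean dot product between unit displacement vectors d_i and unit
    gradient vectors g_i is largest is returned as (x, y)."""
    h_, w_ = len(gray), len(gray[0])
    grads = []
    for y in range(1, h_ - 1):
        for x in range(1, w_ - 1):
            gx = (gray[y][x + 1] - gray[y][x - 1]) / 2.0  # central differences
            gy = (gray[y + 1][x] - gray[y - 1][x]) / 2.0
            m = (gx * gx + gy * gy) ** 0.5
            if m > 1e-6:
                grads.append((x, y, gx / m, gy / m))       # unit gradient g_i
    best, best_score = None, float("-inf")
    for cy in range(h_):
        for cx in range(w_):
            score = 0.0
            for x, y, gx, gy in grads:
                dx, dy = x - cx, y - cy
                dm = (dx * dx + dy * dy) ** 0.5
                if dm < 1e-6:
                    continue                                # skip d_i at the candidate itself
                score += (dx / dm) * gx + (dy / dm) * gy    # d_i . g_i
            if score > best_score:
                best_score, best = score, (cx, cy)
    return best
```

On a synthetic dark disc against a bright background, the estimate lands at (or immediately next to) the disc center.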
S904: Based on the image parameters (R, t, P_i) and the user's eyeball structure parameters, the electronic device determines the position information of the eyeball center O_e, the optical-axis unit vector V_o, and the visual-axis unit vector V_g.
As shown in FIG. 11, the head coordinate system is determined by the physiological structure of the human head, and the world coordinate system is determined by the electronic device. The eyeball center transforms between the two coordinate systems as follows:

O_e = R·O_e^h + t

where O_e^h denotes the eyeball center in the head coordinate system.
Further, with the iris center P_i and the eyeball center O_e obtained, as shown in FIG. 10, the optical-axis unit vector V_o can be determined according to:

V_o = (P_i − O_e) / r_e

where r_e is the eyeball radius, usually between 11 and 13 mm. The trigonometric expression of the optical-axis unit vector V_o, in terms of the horizontal component θ and vertical component φ of the optical-axis angle, is:

V_o = (cos φ·sin θ, sin φ, −cos φ·cos θ)ᵀ
As shown in FIG. 12, rotating the optical-axis unit vector V_o by the Kappa angle yields the visual-axis unit vector V_g:

V_g = (cos(φ+β)·sin(θ+α), sin(φ+β), −cos(φ+β)·cos(θ+α))ᵀ

where θ and φ, the horizontal and vertical components of the optical-axis angle, are recovered from the components of V_o.
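The Kappa rotation can be sketched as follows. The trigonometric parameterization here (V_o = (cos φ sin θ, sin φ, −cos φ cos θ)) is itself a reconstruction, so the exact sign and axis conventions are assumptions of this sketch rather than details confirmed by the embodiment.

```python
import math

def optic_axis_angles(v_o):
    """Recover the horizontal/vertical components (theta, phi) of the
    optical-axis angle from the assumed unit-vector parameterization
    V_o = (cos(phi)*sin(theta), sin(phi), -cos(phi)*cos(theta))."""
    phi = math.asin(v_o[1])
    theta = math.atan2(v_o[0], -v_o[2])
    return theta, phi

def visual_axis(v_o, alpha, beta):
    """Rotate the optical axis by the Kappa angle (horizontal alpha,
    vertical beta, in radians) to obtain the visual-axis unit vector V_g."""
    theta, phi = optic_axis_angles(v_o)
    t, p = theta + alpha, phi + beta
    return (math.cos(p) * math.sin(t), math.sin(p), -math.cos(p) * math.cos(t))
```

With a zero Kappa angle, the visual axis coincides with the optical axis, as expected.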
S905: Using the determined position information of the eyeball center O_e, the optical-axis unit vector V_o, and the visual-axis unit vector V_g, the electronic device determines the position information of the user's attention point P_g on the display screen of the electronic device.
Specifically, the electronic device substitutes the determined O_e, V_o, and V_g into the expression for λ above to obtain λ; further, it substitutes the determined O_e, V_o, V_g, and λ into formula (1) to obtain the three-dimensional coordinates (in the world coordinate system) of the attention point P_g of the user's line of sight on the display screen.
In some optional implementations, the electronic device may separately determine the attention points P_g,left and P_g,right at which the lines of sight of the user's two eyes fall on the display screen; the user's attention point on the display screen is then P_g = (P_g,left + P_g,right) / 2. The process by which the electronic device determines P_g,left and P_g,right follows the relevant descriptions in S903-S905 above and is not repeated here.
It should be understood that the above implementation for determining the position information of the attention point is only an example; other implementations, such as deep learning, are also possible.
S906-S907: The electronic device focuses based on the position information of the attention point.
The electronic device first determines the focus frame based on the position information of the attention point, and then uses the phase-difference data within the focus frame to perform focusing.
S906: The electronic device determines the focus frame based on the position information of the attention point on the display screen.
The electronic device may preset the size and number of the focus frames, each focus frame having the same size. After the position information of the attention point is determined, the positions of multiple focus frames can be determined with the attention point as the center; further, each focus frame is placed at its corresponding position. Optionally, the electronic device may display the focus frames.
Specifically, the electronic device may determine the position of the focus frame centered on the determined attention point, or centered on the position of a shooting subject determined from the attention point. Implementation A and Implementation B below describe these two processes of determining the position of the focus frame.
Implementation A: The electronic device determines the position of the focus frame centered on the determined attention point.
In some embodiments, after the position information of the attention point is determined, when the electronic device detects that the duration of the user's gaze at the attention point satisfies the preset condition described in Implementation 1 above, it determines the position of the focus frame with the attention point as the center. For the preset condition described in Implementation 1, refer to the description above, which is not repeated here.
In other embodiments, after the position information of the attention point is determined, the electronic device may first determine whether the user's physiological feature information matches a preset expression or action instruction; when it matches, the electronic device determines the placement of the focus frame with the attention point as the center.
Exemplarily, the electronic device may preset five identical focus frames such as frame 1301 shown in FIG. 13A. As shown in FIG. 13B, the electronic device determines that the attention point is at position 1306; further, centered on position 1306, the electronic device determines the five focus frames 1301, 1302, 1303, 1304 and 1305 shown in FIG. 13C.
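The cross-shaped placement of five equal-size focus frames around the attention point can be sketched as follows; the `gap` spacing and the (left, top, right, bottom) coordinate convention are illustrative assumptions of this sketch.

```python
def cross_focus_frames(cx, cy, size, gap):
    """Place five equal-size focus frames in a cross pattern centered on the
    attention point (cx, cy): one central frame plus four frames offset by
    `gap` pixels left, right, up, and down. Each frame is
    (left, top, right, bottom)."""
    offsets = [(0, 0), (-gap, 0), (gap, 0), (0, -gap), (0, gap)]
    return [(cx + dx - size / 2, cy + dy - size / 2,
             cx + dx + size / 2, cy + dy + size / 2) for dx, dy in offsets]
```

The same helper could place a nine-square-grid arrangement by changing the offset list, matching the note below that the arrangement is not limited to a cross.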
Implementation B: The electronic device determines the position of the focus frame centered on the position of the shooting subject determined from the attention point.
In some embodiments, after the position information of the attention point is determined, the electronic device may determine the placement of the focus frame in combination with the user's voice information. Specifically, in response to the operation that triggers attention-point-based focusing, the electronic device acquires the sound of the environment in real time through the microphone and recognizes the voice information it contains. When the electronic device has determined the position information of the user's attention point and detects that the user's voice information includes information about a scene or person in the preview image, it identifies that scene or person (also referred to as the shooting subject) within a preset-range area around the attention point; further, the electronic device determines the position of the focus frame with the position of the shooting subject as the center. The preset-range area around the attention point may be an area of preset pixel height and preset pixel width centered on the attention point. For an exemplary description of how the electronic device determines the subject's position information in combination with the user's voice information, refer to the descriptions of FIG. 7A-FIG. 7C in Implementation 2 above, which are not repeated here.
In some other embodiments, after determining the position information of the user's attention point, the electronic device identifies the subject (a scene or a person) at the location of the attention point (position A) and then focuses based on the position information of that subject. When the electronic device detects that the user's attention point has shifted to another location (position B), it identifies the scenes or people within a preset range of position B; when it recognizes one of the same category as the subject at position A, it records the subject at position B as the shooting subject and determines its position information. Further, the electronic device determines the focus frame with the position of the shooting subject as the center, and then performs the focusing process. Here, "the same category" means scenes or people sharing the same characteristics, such as human faces, buildings, roads, vehicles, plants, and animals; it should be understood that the position information of position A and that of the subject at position A may be identical or may deviate slightly. For an exemplary description of how the electronic device focuses based on the previously focused subject, refer to the descriptions of FIG. 8A-FIG. 8B in Implementation 4 above, which are not repeated here.
Exemplarily, the electronic device may preset five identical focus frames such as frame 1301 shown in FIG. 13A. As shown in FIG. 13B, the position of the shooting subject determined by the electronic device based on the attention point is position 1306; further, centered on position 1306, the electronic device determines the five focus frames 1301, 1302, 1303, 1304 and 1305 shown in FIG. 13C.
It should be understood that the number of preset focus frames may be five, as in the above example, or another number, such as nine; this is not limited in the embodiments of this application. The arrangement of the focus frames is not limited to the cross shape shown in FIG. 13C; it may also be a concentric pattern, a nine-square grid, or another arrangement. The embodiments of this application do not limit the arrangement or positions of the focus frames.
It should be understood that, in some embodiments, the position and size of the focus frame may also be determined from the ROI frame determined based on the position information of the attention point in FIG. 6E-FIG. 6G, FIG. 7C, FIG. 8A and FIG. 8B above; for example, the focus frame takes the size of the ROI frame and is placed at the center of the ROI frame. For the determination of the ROI frame, refer to the relevant descriptions of FIG. 6E-FIG. 6G, FIG. 7C, FIG. 8A and FIG. 8B above.
S907: The electronic device performs focusing using the phase-difference data within the focus frame.
In some embodiments, the electronic device focuses by phase-detection autofocus. Specifically, the electronic device obtains the phase-difference data (for example, the mean phase difference) of the image within the focus frame through an image sensor that has phase-detection pixels, and then looks up the target offset corresponding to that phase-difference data in a lookup table; further, the electronic device drives the voice-coil motor to move by the target offset to adjust the lens position and achieve focus. The offset includes the distance and direction between the lens and the focal plane. The lookup table contains multiple phase-difference values and their corresponding offsets, and can be obtained through calibration with a fixed test chart: for a fixed chart, the lens is moved (that is, the offset is changed), the phase difference corresponding to each offset is computed, and the results are recorded as the lookup table.
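The lookup-table step of phase-detection autofocus, together with aggregating several frames' phase differences by their mean, can be sketched as follows. The linear interpolation between calibration entries and the clamping at the table ends are assumptions of this sketch, not stated in the embodiment.

```python
def lens_offset(pd, lut):
    """Look up the lens offset for a measured phase difference `pd`, using a
    calibration table of (phase_difference, offset) pairs; values between
    entries are linearly interpolated, values outside are clamped."""
    pts = sorted(lut)
    if pd <= pts[0][0]:
        return pts[0][1]
    if pd >= pts[-1][0]:
        return pts[-1][1]
    for (p0, o0), (p1, o1) in zip(pts, pts[1:]):
        if p0 <= pd <= p1:
            return o0 + (o1 - o0) * (pd - p0) / (p1 - p0)

def target_phase_difference(frame_pds):
    """Aggregate the per-focus-frame phase differences; this sketch uses the
    mean, one of the aggregation choices mentioned in the embodiment."""
    return sum(frame_pds) / len(frame_pds)
```

The resulting offset (distance and sign, i.e. direction) would then be handed to the voice-coil motor driver.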
In one implementation using phase-detection autofocus, the electronic device takes the mean of the phase differences in the multiple focus frames as the target phase-difference data; further, it looks up the target offset corresponding to the target phase-difference data and drives the voice-coil motor to move by that offset to adjust the lens position, thereby focusing on the subject at the attention point. It should be understood that the target phase-difference data may also be another value computed from the phase differences of the multiple focus frames, for example their maximum; the embodiments of this application do not limit how the target phase-difference data is computed.
示例性的,如图13C所示,目标相差数据可以为对焦框1301、1302、1303、1304、1305和1306中的相差的均值。Exemplarily, as shown in FIG. 13C, the target phase difference data may be the mean of the phase differences in focus frames 1301, 1302, 1303, 1304, 1305, and 1306.
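The aggregation described in the two paragraphs above can be sketched as below: the target phase difference data is computed from the per-frame phase differences either as their mean or as another statistic such as the maximum. The six sample values are illustrative, not measurements from FIG. 13C:

```python
def target_phase_difference(frame_phases, mode="mean"):
    """Aggregate per-focus-frame phase differences into one target value.

    `mode` selects the statistic; the embodiments do not limit this choice,
    so only two representative options are shown here.
    """
    if mode == "mean":
        return sum(frame_phases) / len(frame_phases)
    if mode == "max":
        return max(frame_phases)
    raise ValueError(f"unsupported aggregation mode: {mode}")

# e.g. phase differences for focus frames 1301-1306 (illustrative values)
phases = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60]
```

The resulting target value is then looked up in the calibration table to obtain the voice-coil offset, as described above.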
在另一些实施例中,电子设备还可以应用反差对焦、激光对焦、或组合对焦的方式进行对焦。组合对焦为相位对焦、反差对焦、激光对焦中的任意两个或任意三个对焦的方式进行对焦。对于对焦的方式,本申请实施例不作限定。In some other embodiments, the electronic device may also focus using contrast focusing, laser focusing, or combined focusing. Combined focusing performs focusing with any two or all three of phase-detection focusing, contrast focusing, and laser focusing. The focusing manner is not limited in the embodiments of this application.
下面介绍本申请实施例提供的示例性电子设备100。The exemplary electronic device 100 provided by the embodiment of the present application is introduced below.
图14示出了电子设备100的结构示意图。FIG. 14 shows a schematic structural diagram of the electronic device 100 .
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that, the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 . In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components. The illustrated components can be realized in hardware, software or a combination of software and hardware.
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器 (application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc. The different processing units may be independent devices, or may be integrated in one or more processors.
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。Wherein, the controller may be the nerve center and command center of the electronic device 100 . The controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses repeatedly. If the processor 110 needs to use the instructions or data again, they can be fetched directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100.
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。The I2S interface can be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 . In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。The PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。The UART interface is a universal serial data bus used for asynchronous communication. The bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 and the wireless communication module 160 . For example: the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface, DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 . MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 . The processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on. The GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。The USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like. The USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。It can be understood that the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 . In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。The charging management module 140 is configured to receive a charging input from a charger. Wherein, the charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 can receive charging input from the wired charger through the USB interface 130 . In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also provide power for electronic devices through the power management module 141 .
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。The power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 . The power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 . The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110 . In some other embodiments, the power management module 141 and the charging management module 140 may also be set in the same device.
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。The wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。The mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 . The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 . In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。A modem processor may include a modulator and a demodulator. Wherein, the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing. The low-frequency baseband signal is passed to the application processor after being processed by the baseband processor. The application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 . In some embodiments, the modem processor may be a stand-alone device. In some other embodiments, the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。The wireless communication module 160 can provide solutions for wireless communication applied on the electronic device 100, including wireless local area networks (wireless local area networks, WLAN) (such as a wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (bluetooth, BT), the global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (infrared, IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves through the antenna 2 for radiation.
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (global positioning system, GPS), the global navigation satellite system (global navigation satellite system, GLONASS), the BeiDou navigation satellite system (beidou navigation satellite system, BDS), the quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or satellite based augmentation systems (satellite based augmentation systems, SBAS).
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。The display screen 194 is used to display images, videos and the like. The display screen 194 includes a display panel. The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active matrix organic light emitting diode or an active matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED), flexible light-emitting diode (flex light-emitting diode, FLED), Miniled, MicroLed, Micro-oLed, quantum dot light emitting diodes (quantum dot light emitting diodes, QLED), etc. In some embodiments, the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
本申请实施例中,显示面板可采用OLED、AMOLED、FLED实现,使得显示屏194可以被弯折。本申请实施例中,将可以被弯折的显示屏称为可折叠显示屏。其中,该可折叠显示屏可以是一块屏幕,也可以是多块屏幕拼凑在一起组合成的显示屏,在此不作限定。In the embodiment of the present application, the display panel can be realized by using OLED, AMOLED, or FLED, so that the display screen 194 can be bent. In the embodiments of the present application, a display screen that can be bent is called a foldable display screen. Wherein, the foldable display screen may be one screen, or may be a display screen composed of multiple screens pieced together, which is not limited here.
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。The electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。Camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects it to the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. DSP converts digital image signals into standard RGB, YUV and other image signals. In some embodiments, the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
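The final conversion step above (the DSP producing standard RGB or YUV image signals) can be illustrated with the usual full-range BT.601 formula for converting a single 8-bit YUV pixel to RGB. This is a generic textbook conversion added for illustration, not a formula taken from the patent:

```python
def yuv_to_rgb(y: int, u: int, v: int) -> tuple:
    """Convert one full-range 8-bit BT.601 YUV pixel to an (R, G, B) triple."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, round(x)))  # keep channels in [0, 255]
    return clamp(r), clamp(g), clamp(b)

# A pixel with neutral chroma (U = V = 128) maps to a gray of the same level.
```

The ISP/DSP in a real device performs this kind of color-space conversion in hardware across the whole sensor frame rather than per pixel in software.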
本申请实施例中,摄像头193包括前置摄像头和后置摄像头,其中,前置摄像头可以用户获取目标图像,电子设备基于目标图像确定用户在电子设备的显示屏上的注意力点的位置信息,关于目标图像的说明可以参见上述图9中S902的相关描述,关于基于目标图像确定注意力点的位置信息的过程可以参见上述图9中S903-S905的相关描述,此处不再赘述。In the embodiments of this application, the camera 193 includes a front camera and a rear camera. The front camera can capture a target image of the user, and the electronic device determines, based on the target image, the position information of the user's attention point on the display screen of the electronic device. For a description of the target image, refer to the related description of S902 in FIG. 9 above; for the process of determining the position information of the attention point based on the target image, refer to the related descriptions of S903-S905 in FIG. 9 above. Details are not repeated here.
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。The ISP is used to process data fed back by the camera 193. For example, when a photo is taken, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element; the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the noise, brightness, and skin tone of the image. The ISP can also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193. The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 performs frequency point selection, the digital signal processor is used to perform a Fourier transform on the frequency point energy, etc.
在本申请实施例中,ISP还可以用于基于注意力点的位置信息对注意力点处的主体进行自动对焦,具体的对焦过程可以参见上述图9中S906-S907中的相关描述,此处不再赘述。In the embodiments of this application, the ISP can also be used to automatically focus on the subject at the attention point based on the position information of the attention point. For the specific focusing process, refer to the related descriptions in S906-S907 in FIG. 9 above. Details are not repeated here.
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。The NPU is a neural-network (NN) computing processor. By referring to the structure of biological neural networks, such as the transfer mode between neurons in the human brain, it can quickly process input information and continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
在本申请实施例中,NPU可以用于识别目标图像中的人脸、眼部的图像,具体可以参见上述图9中S903中的相关描述。NPU也可以用于基于注意力点识别注意力点周围的拍摄主体,进一步可以基于拍摄主体的位置信息进行对焦,该过程具体说明可以参见上述图9中S906中实现方式B和S907的相关描述,此处不再赘述。In the embodiments of this application, the NPU may be used to recognize the images of the human face and eyes in the target image; for details, refer to the related description in S903 in FIG. 9 above. The NPU may also be used to recognize, based on the attention point, the shooting subject around the attention point, and then focus based on the position information of the shooting subject. For a detailed description of this process, refer to the related descriptions of implementation B in S906 and of S907 in FIG. 9 above. Details are not repeated here.
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应 用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。The internal memory 121 may be used to store computer-executable program codes including instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 . The internal memory 121 may include an area for storing programs and an area for storing data. Wherein, the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like. The storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。The electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
The audio module 170 is configured to convert digital audio information into an analog audio signal for output, and to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "loudspeaker", is configured to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or a hands-free call through the speaker 170A.
The receiver 170B, also called an "earpiece", is configured to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or plays a voice message, the user can hear the voice by holding the receiver 170B close to the ear.
The microphone 170C, also called a "mic" or "sound transducer", is configured to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify the sound source, implement a directional recording function, and the like.
The earphone interface 170D is configured to connect wired earphones. The earphone interface 170D may be the USB interface 130, a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is configured to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the pressure intensity from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the icon of a messaging application, an instruction to view messages is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon, an instruction to create a new message is executed.
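The intensity-dependent dispatch described above can be sketched as follows; the threshold value and instruction names here are illustrative assumptions, not values from this application:

```python
def dispatch_touch(intensity, first_pressure_threshold=0.5):
    """Map a touch on the messaging-app icon to an instruction by intensity.

    Sketch only: the threshold value and the instruction names are
    illustrative placeholders for the behavior described above.
    """
    if intensity < first_pressure_threshold:
        return "view_message"       # light press: view messages
    return "create_message"         # firm press: create a new message
```

The same pattern generalizes to any per-icon mapping from pressure bands to instructions.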
The gyro sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 around three axes (that is, the x, y, and z axes) may be determined through the gyro sensor 180B. The gyro sensor 180B may be used for image stabilization during shooting. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the angle through which the electronic device 100 shakes, calculates from this angle the distance that the lens module needs to compensate, and lets the lens cancel the shake of the electronic device 100 through reverse movement, thereby implementing image stabilization. The gyro sensor 180B may also be used in navigation and motion-sensing game scenarios.
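The compensation step can be sketched with a common small-angle optical-stabilization model, lens shift ≈ f·tan(θ). This formula is our assumption for illustration; the application does not specify how the compensation distance is computed.

```python
import math

def lens_compensation_mm(shake_angle_deg, focal_length_mm):
    """Estimate the reverse lens shift needed to cancel an angular shake.

    Assumed model: shift = focal_length * tan(shake_angle). This is a
    generic OIS approximation, not the application's own formula.
    """
    theta = math.radians(shake_angle_deg)
    return focal_length_mm * math.tan(theta)
```

For a 26 mm-equivalent lens and a 0.5° shake, the required shift is roughly 0.23 mm under this model.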
The barometric pressure sensor 180C is configured to measure barometric pressure. In some embodiments, the electronic device 100 calculates the altitude from the barometric pressure value measured by the barometric pressure sensor 180C, to assist positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip leather cover. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover through the magnetic sensor 180D, and then set features such as automatic unlocking upon flip opening according to the detected open or closed state of the leather cover or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor 180E may also be used to identify the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and similar applications.
The distance sensor 180F is configured to measure a distance. The electronic device 100 may measure the distance by infrared light or laser. In some embodiments, in a shooting scenario, the electronic device 100 may use the distance sensor 180F to measure the distance to implement fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from a nearby object. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 can determine that there is no object nearby. The electronic device 100 may use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used for automatic unlocking and screen locking in leather-cover mode and pocket mode.
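The sufficient/insufficient reflected-light decision reduces to a threshold test; the threshold value below is an illustrative assumption:

```python
def object_nearby(reflected_light, sufficient_threshold=0.6):
    """Return True when enough reflected infrared light is detected,
    i.e., when an object is judged to be near the device.

    Sketch only: the normalized light scale and threshold are
    illustrative, not specified by the application.
    """
    return reflected_light >= sufficient_threshold
```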
The ambient light sensor 180L is configured to sense the ambient light brightness. The electronic device 100 may adaptively adjust the brightness of the display screen 194 according to the sensed ambient light brightness. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking photos, and may cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H is configured to collect fingerprints. The electronic device 100 may use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 180J is configured to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing policy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing the electronic device 100 to shut down abnormally. In still other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
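The tiered temperature policy above can be sketched as follows; the threshold values and action names are illustrative assumptions, not values from this application:

```python
def thermal_policy(temp_c, high=45, low=0, very_low=-10):
    """Pick a thermal action from the reported temperature.

    Sketch of the tiered policy described above; all thresholds and
    action names are illustrative placeholders.
    """
    if temp_c > high:
        return "throttle_cpu"           # reduce nearby processor performance
    if temp_c < very_low:
        return "boost_battery_voltage"  # avoid low-temperature shutdown
    if temp_c < low:
        return "heat_battery"           # prevent abnormal shutdown
    return "normal"
```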
The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is configured to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may alternatively be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M may also contact the human pulse and receive the blood pressure pulse signal. In some embodiments, the bone conduction sensor 180M may also be disposed in an earphone to form a bone conduction earphone. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part vibrating bone acquired by the bone conduction sensor 180M, to implement a voice function. The application processor may parse heart rate information based on the blood pressure pulse signal acquired by the bone conduction sensor 180M, to implement a heart rate detection function.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration alert. The motor 191 may be used for incoming-call vibration alerts and for touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) may correspond to different vibration feedback effects, and touch operations acting on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (for example, time reminders, received messages, alarm clocks, and games) may likewise correspond to different vibration feedback effects. The touch vibration feedback effect may also be customized.
The indicator 192 may be an indicator light, and may be used to indicate the charging status and battery-level changes, and to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 is configured to connect a SIM card. A SIM card may be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano-SIM card, a Micro-SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time, and the types of the cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, that is, an embedded SIM card; the eSIM card may be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
As used in the foregoing embodiments, depending on the context, the term "when" may be interpreted as meaning "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "when determining" or "if (a stated condition or event) is detected" may be interpreted as meaning "if determining", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)".
The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive), or the like.
A person of ordinary skill in the art can understand that all or part of the procedures in the methods of the foregoing embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the procedures of the foregoing method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.
Claims (14)
- A focusing method, characterized in that the method comprises: in response to a first operation of a user, an electronic device starts shooting, displays a first interface, and displays, on the first interface, a preview picture captured by a camera; and the electronic device displays a first preview picture on the first interface, where the first preview picture is a preview picture captured by the camera with an attention point that satisfies a preset condition as the focus point, and the attention point is the position point on the first interface where the user's line of sight falls.
- The method according to claim 1, characterized in that the electronic device displaying the first preview picture on the first interface specifically comprises: the electronic device acquires a target image through a front camera, where the target image includes an image of the user's eyes; the electronic device determines the user's attention point based on the target image; when the user's attention point satisfies the preset condition, the focal length of the camera is adjusted with the user's attention point as the focus point, and the first preview picture is captured through the camera; and the first preview picture is displayed on the first interface.
- The method according to claim 1 or 2, characterized in that the method further comprises: the electronic device detects a second operation of enabling an attention-point focusing function; and in response to the second operation, the electronic device displays the first preview picture on the first interface.
- The method according to claim 2 or 3, characterized in that the electronic device determining the user's attention point based on the target image specifically comprises: when the user's attention point is detected based on the target image, the electronic device determines the duration for which the attention point is detected; and the adjusting, when the user's attention point satisfies the preset condition, the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically comprises: when the duration of the attention point is not less than a first duration threshold, adjusting the focal length of the camera with the user's attention point as the focus point, and capturing the first preview picture through the camera.
- The method according to any one of claims 2-4, characterized in that the electronic device determining the user's attention point based on the target image specifically comprises: when the user's attention point is detected based on the target image, the electronic device determines the duration for which the attention point is detected and the interruption time during which the attention point is not detected; and the adjusting, when the user's attention point satisfies the preset condition, the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically comprises: when the duration for which the attention point is detected is not less than a second duration threshold and the interruption time during which the attention point is not detected is less than a third duration threshold, adjusting the focal length of the camera with the user's attention point as the focus point, and capturing the first preview picture through the camera.
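The dwell and interruption tests of claims 4 and 5 can be sketched together as a single predicate; the threshold values are illustrative assumptions, since the claims leave them unspecified:

```python
def should_refocus(detected_duration_s, interruption_s,
                   min_duration_s=0.8, max_interruption_s=0.3):
    """Decide whether to drive autofocus to the user's attention point.

    The attention point must have been detected for at least
    min_duration_s (the duration threshold of claims 4-5) while any loss
    of detection stayed below max_interruption_s (the interruption
    threshold of claim 5). Threshold values are illustrative.
    """
    return (detected_duration_s >= min_duration_s
            and interruption_s < max_interruption_s)
```

A brief glance (short dwell) or a long look-away (long interruption) both leave the current focus point unchanged.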
- The method according to any one of claims 1-5, characterized in that the electronic device determining the user's attention point based on the target image specifically comprises: the electronic device determines the user's attention point based on the target image and detects a user action; and the adjusting, when the user's attention point satisfies the preset condition, the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically comprises: when it is detected that the user action in the target image is a set action, adjusting the focal length of the camera with the user's current attention point as the focus point, and capturing the first preview picture through the camera, where the set action includes one or more of pupil dilation, a "nod", and an "OK" gesture.
- The method according to any one of claims 2-6, characterized in that the electronic device determining the user's attention point based on the target image specifically comprises: at a first moment, the electronic device determines, based on the target image, a first duration for which the user's first attention point is detected; the adjusting, when the user's attention point satisfies the preset condition, the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically comprises: when it is detected that the first duration is greater than a fourth duration threshold, adjusting the focal length of the camera with the first attention point as the focus point, and capturing the first preview picture through the camera; and the method further comprises: at a second moment after the first moment, the electronic device determines, based on the target image, a second duration for which the user's second attention point is detected, where the first attention point and the second attention point are attention points at different positions; and when it is detected that the second duration is greater than the fourth duration threshold, adjusting the focal length of the camera again with the second attention point as the focus point, and capturing a second preview picture through the camera.
- The method according to any one of claims 2-7, characterized in that the electronic device determining the user's attention point based on the target image specifically comprises: the electronic device determines the position of the user's attention point based on the target image; and the adjusting, when the user's attention point satisfies the preset condition, the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically comprises: when the position of the user's attention point is at the boundary between multiple objects in the target picture, the electronic device matches an object in the target picture based on the user's voice information; and when the electronic device matches a first focus object, adjusting the focal length of the camera with the first focus object as the focus point, and capturing the first preview picture through the camera, where the first focus object is one scene or one person among the multiple objects in the target picture.
- The method according to any one of claims 2-8, characterized in that the electronic device determining the user's attention point based on the target image specifically comprises: the electronic device determines, based on the target image, the position of the user's attention point and the type of the object at that position; and the adjusting, when the user's attention point satisfies the preset condition, the focal length of the camera with the user's attention point as the focus point and capturing the first preview picture through the camera specifically comprises: when the attention point is at the boundary between objects of multiple types in the target picture, adjusting the focal length of the camera with a focus object of the same type as the previous focus object as the focus point, and capturing the first preview picture through the camera.
- The method according to any one of claims 2-9, characterized in that the first preview picture includes a focus frame, and the attention point is the center position of the focus frame.
- The method according to any one of claims 2-10, characterized in that the first preview picture includes a focus frame, and the center of the shooting subject at the attention point is the center position of the focus frame.
- The method according to any one of claims 2-11, characterized in that the electronic device determining the user's attention point based on the target image comprises: the attention point of the first person is P_g, expressed as: P_g = O_e + c·V_o + λ·V_g, where λ is the modulus (distance) between the corneal center O_c and the attention point P_g; the eyeball structural parameters of the first person include: the eyeball center O_e; the Kappa angle, which is the angle between the visual axis and the optical axis, with horizontal component α and vertical component β; R, a rotation parameter, and t, a translation parameter, with respect to the head coordinate system; and V_s, the unit normal vector of the plane in which the display screen of the electronic device is located; V_o is the optical-axis unit vector, expressed in terms of its deflection angles; and V_g is the visual-axis unit vector.
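The claim-12 model P_g = O_e + c·V_o + λ·V_g can be sketched numerically as a ray-plane intersection. The expression for λ below (intersecting the visual axis with the screen plane of normal V_s) is our assumption for illustration; the claim's own expression for λ is not reproduced in this extraction.

```python
import numpy as np

def gaze_point_on_screen(o_e, v_o, v_g, c, p_s, v_s):
    """Compute P_g = O_e + c*V_o + lam*V_g on the screen plane.

    o_e: eyeball center; v_o: optical-axis unit vector;
    v_g: visual-axis unit vector; c: eyeball-center-to-corneal-center
    offset along the optical axis; p_s: any point on the screen plane;
    v_s: unit normal of the screen plane. lam is chosen so that P_g lies
    on the plane (assumed ray-plane intersection, not the claim's own
    formula for lam).
    """
    o_c = o_e + c * v_o                              # corneal center O_c
    lam = np.dot(p_s - o_c, v_s) / np.dot(v_g, v_s)  # distance along V_g
    return o_c + lam * v_g
```

For a screen in the z = 0 plane and a visual axis pointing straight at it, P_g reduces to the foot of the gaze ray on the screen.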
- An electronic device, characterized by comprising: a touchscreen, a camera, one or more processors, and one or more memories, where the one or more processors are coupled to the touchscreen, the camera, and the one or more memories; the one or more memories are configured to store computer program code, the computer program code comprising computer instructions; and when the one or more processors execute the computer instructions, the electronic device is caused to perform the method according to any one of claims 1-12.
- A computer-readable storage medium comprising instructions, characterized in that, when the instructions are run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1-12.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110713771.5 | 2021-06-25 | ||
CN202110713771.5A CN113572956A (en) | 2021-06-25 | 2021-06-25 | Focusing method and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022267464A1 (en) | 2022-12-29 |
Family
ID=78162798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/072712 WO2022267464A1 (en) | 2021-06-25 | 2022-01-19 | Focusing method and related device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113572956A (en) |
WO (1) | WO2022267464A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117097985A (en) * | 2023-10-11 | 2023-11-21 | 荣耀终端有限公司 | Focusing method, electronic device and computer readable storage medium |
WO2024169363A1 (en) * | 2023-02-17 | 2024-08-22 | 荣耀终端有限公司 | Focus tracking method, focusing method, photographing method, and electronic device |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113572956A (en) * | 2021-06-25 | 2021-10-29 | 荣耀终端有限公司 | Focusing method and related equipment |
CN116074624B (en) * | 2022-07-22 | 2023-11-10 | 荣耀终端有限公司 | Focusing method and device |
CN117472256B (en) * | 2023-12-26 | 2024-08-23 | 荣耀终端有限公司 | Image processing method and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103905709A (en) * | 2012-12-25 | 2014-07-02 | 联想(北京)有限公司 | Electronic device control method and electronic device |
CN106303193A (en) * | 2015-05-25 | 2017-01-04 | 展讯通信(天津)有限公司 | Image capturing method and device |
CN106331498A (en) * | 2016-09-13 | 2017-01-11 | 青岛海信移动通信技术股份有限公司 | Image processing method and image processing device used for mobile terminal |
CN110177210A (en) * | 2019-06-17 | 2019-08-27 | Oppo广东移动通信有限公司 | Photographic method and relevant apparatus |
CN111510626A (en) * | 2020-04-21 | 2020-08-07 | Oppo广东移动通信有限公司 | Image synthesis method and related device |
CN112261300A (en) * | 2020-10-22 | 2021-01-22 | 维沃移动通信(深圳)有限公司 | Focusing method and device and electronic equipment |
CN113572956A (en) * | 2021-06-25 | 2021-10-29 | 荣耀终端有限公司 | Focusing method and related equipment |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024169363A1 (en) * | 2023-02-17 | 2024-08-22 | 荣耀终端有限公司 | Focus tracking method, focusing method, photographing method, and electronic device |
CN117097985A (en) * | 2023-10-11 | 2023-11-21 | 荣耀终端有限公司 | Focusing method, electronic device and computer readable storage medium |
CN117097985B (en) * | 2023-10-11 | 2024-04-02 | 荣耀终端有限公司 | Focusing method, electronic device and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113572956A (en) | 2021-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3866458B1 (en) | Method and device for capturing images | |
CN110035141B (en) | Shooting method and equipment | |
EP3937480A1 (en) | Multi-path video recording method and device | |
WO2022267464A1 (en) | Focusing method and related device | |
WO2021129198A1 (en) | Method for photography in long-focal-length scenario, and terminal | |
US11272116B2 (en) | Photographing method and electronic device | |
CN113475057A (en) | Video frame rate control method and related device | |
WO2020015149A1 (en) | Wrinkle detection method and electronic device | |
WO2023273323A9 (en) | Focusing method and electronic device | |
CN113467735A (en) | Image adjusting method, electronic device and storage medium | |
CN113572957B (en) | Shooting focusing method and related equipment | |
WO2022068505A1 (en) | Photographing method and electronic device | |
WO2022033344A1 (en) | Video stabilization method, and terminal device and computer-readable storage medium | |
CN114302063B (en) | Shooting method and equipment | |
WO2022105670A1 (en) | Display method and terminal | |
WO2023071497A1 (en) | Photographing parameter adjusting method, electronic device, and storage medium | |
WO2021197014A1 (en) | Picture transmission method and apparatus | |
CN116055872B (en) | Image acquisition method, electronic device, and computer-readable storage medium | |
CN116437194B (en) | Method, apparatus and readable storage medium for displaying preview image | |
RU2789447C1 (en) | Method and apparatus for multichannel video recording | |
RU2822535C2 (en) | Method and device for multichannel video recording | |
WO2021063156A1 (en) | Method for adjusting folding angle of electronic device, and electronic device | |
CN114691066A (en) | Application display method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22826973; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 22826973; Country of ref document: EP; Kind code of ref document: A1 |