WO2022267464A1 - Focusing method and related device - Google Patents

Focusing method and related device


Publication number
WO2022267464A1
WO2022267464A1 (PCT/CN2022/072712, CN2022072712W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
electronic device
attention point
camera
point
Prior art date
Application number
PCT/CN2022/072712
Other languages
English (en)
Chinese (zh)
Inventor
林梦然
邵涛
Original Assignee
荣耀终端有限公司
Priority date
Filing date
Publication date
Application filed by 荣耀终端有限公司
Publication of WO2022267464A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/67: Focus control based on electronic image sensor signals

Definitions

  • the present application relates to the field of terminal technologies, and in particular to a focusing method and related equipment.
  • Smart terminal devices such as mobile phones and tablet computers are commonly used for taking pictures.
  • Based on the position of the focus point selected by the user, the device can drive a voice coil motor to adjust the position of the lens, changing the distance between the lens and the image sensor so that the focal plane falls on the image sensor. Focusing is thereby achieved, and a clear image of the focus area can be captured.
  • Embodiments of the present application disclose a focusing method and related device, which can focus automatically according to the user's attention point, reducing user operations and improving user experience.
  • An embodiment of the present application discloses a focusing method, including: in response to a first operation by the user, the electronic device starts shooting, displays a first interface, and displays on the first interface a preview image captured by the camera;
  • the electronic device displays a first preview image on the first interface, where the first preview image is a preview image captured by the camera with an attention point that meets a preset condition as the focus point, and the attention point is the position on the first interface where the user's line of sight falls.
  • For example, the first operation may be an operation in which the user taps the camera application on the screen to start shooting.
  • In this way, when the user holds the electronic device to take pictures, the electronic device can automatically focus based on the acquired position of the user's attention point in the preview image. This focuses accurately on the user's attention point, is easy to operate, and can improve user experience.
  • The electronic device displaying a first preview image on the first interface specifically includes: the electronic device acquires a target image through a front camera, where the target image includes an image of the user's eyes; the electronic device determines the user's attention point based on the target image; when the user's attention point satisfies the preset condition, the electronic device adjusts the focal length of the camera with the user's attention point as the focus point and captures the first preview image through the camera; and displays the first preview image on the first interface.
  • In this way, the electronic device obtains the user's target image in real time through the front camera and determines the position of the attention point, providing a stable source of focus points. The electronic device can focus continuously, thereby continuously obtaining clear images of the user's region of interest, and user experience can be improved.
  • The method further includes: the electronic device detects a second operation for enabling the attention-point focusing function; in response to the second operation, the electronic device displays the first preview image on the first interface.
  • For example, the second operation may be a trigger operation to enable the attention-focus function, or a voice command to enable the attention-focus function.
  • The electronic device determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, the electronic device determines the duration for which the attention point is detected. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the duration of the detected attention point is not less than a first duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera.
  • In this way, the attention point can be focused on only after the user's gaze has stabilized, thereby improving the stability of the preview image.
  • Here, the duration of the detected attention point may be the duration of the user's gaze at a certain attention point.
  • The electronic device determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, the electronic device determines the duration for which the attention point is detected and the intermittent time for which no attention point is detected. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the duration of the detected attention point is not less than a second duration threshold and the intermittent time for which no attention point is detected is less than a third duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera.
  • In this way, when the user's line of sight is momentarily unstable, for example when the user glances elsewhere and then looks back, the focus point of the preview image can remain unchanged, thereby improving the stability of the image and improving user experience.
  • Here, the duration of the detected attention point may be the duration of the user's gaze at the first attention point, and the intermittent time for which no attention point is detected may be the duration for which the user's line of sight leaves the first attention point.
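The duration and intermittent-time conditions above amount to debouncing the gaze signal before committing a focus point. A minimal sketch follows; the class name, frame-based update, and threshold values are illustrative assumptions, not the patent's implementation.

```python
class GazeFocusTrigger:
    """Debounces gaze samples before committing a focus point.

    Fires a refocus only when the user has dwelt on a point for at least
    `dwell_s` seconds, and tolerates detection gaps shorter than
    `max_gap_s` (e.g. a quick glance away and back). Thresholds are
    illustrative stand-ins for the patent's duration thresholds.
    """

    def __init__(self, dwell_s=0.5, max_gap_s=0.3):
        self.dwell_s = dwell_s
        self.max_gap_s = max_gap_s
        self.point = None   # current candidate attention point
        self.dwell = 0.0    # accumulated gaze time on the candidate
        self.gap = 0.0      # time since gaze was last detected

    def update(self, point, dt):
        """Feed one frame; `point` is (x, y) or None if no gaze detected.

        Returns the point to focus on, or None to keep the current focus.
        """
        if point is None:
            self.gap += dt
            if self.gap >= self.max_gap_s:   # gaze really left: reset
                self.point, self.dwell = None, 0.0
            return None
        self.gap = 0.0
        if point != self.point:              # new candidate point
            self.point, self.dwell = point, 0.0
        self.dwell += dt
        return point if self.dwell >= self.dwell_s else None
```

Feeding per-frame gaze samples into `update` yields a focus point only once the dwell condition holds, and a brief loss of detection does not reset the candidate.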
  • The electronic device determining the user's attention point based on the target image specifically includes: the electronic device determining the user's attention point based on the target image and detecting a user action. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the detected user action in the target image is a set action, using the user's current attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera, where the set action includes one or more of pupil dilation, a "nod", and an "OK" gesture.
  • In this way, the electronic device can perceive a user action indicating that the user is paying close attention to the current attention point, so the captured image better matches the user's intention without requiring manual or other operations by the user, which improves user experience.
  • the target image may include the user's eyes, may also include the user's face, and may also include the user's body parts.
  • The electronic device determining the user's attention point based on the target image specifically includes: at a first moment, the electronic device determines, based on the target image, a first duration for which the user is detected at a first attention point. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when it is detected that the first duration is greater than a fourth duration threshold, using the first attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera. The method further includes: at a second moment, the electronic device determines, based on the target image, a second duration for which the user is detected at a second attention point, where the first attention point and the second attention point are attention points at different positions and the second moment is after the first moment; when it is detected that the second duration is greater than the fourth duration threshold, using the second attention point as the focus point, adjusting the focal length of the camera again, and capturing a second preview image through the camera. In this way, when the user's attention shifts from one position to an adjacent position, each refocusing requires only a small motor adjustment, achieving smooth focusing, improving the imaging quality of the image, and reducing the resource consumption of the electronic device.
  • Here, the first duration may be the duration of the user's gaze at the first attention point, and the second duration may be the duration of the user's gaze at the second attention point.
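The "small motor adjustment" behavior when attention shifts to an adjacent point can be sketched as bounded stepping of the lens position. The function name and the bounded-step policy are illustrative assumptions; the patent only notes that adjacent shifts require small adjustments.

```python
def step_lens(current_pos, target_pos, max_step):
    """Move the lens toward the target position in bounded increments.

    When the new focus target is close to the current position (the
    adjacent-attention-point case), a single small step reaches it;
    a distant target is approached over several steps, giving smooth
    focusing instead of one large jump.
    """
    delta = target_pos - current_pos
    if abs(delta) <= max_step:
        return target_pos                      # within one step: done
    return current_pos + max_step if delta > 0 else current_pos - max_step
```

Calling `step_lens` once per frame converges on the target while capping per-frame motor travel.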
  • The electronic device determining the user's attention point based on the target image specifically includes: the electronic device determining the position of the user's attention point based on the target image. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the position of the attention point is at the intersection of multiple objects in the target picture, the electronic device matches the objects in the target picture based on the user's voice information; when the electronic device matches a first focus object, using the first focus object as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera, where the first focus object is a scene or a person among the multiple objects in the target picture. In this way, the object that the user pays attention to can be determined more accurately and focused on, thereby improving the accuracy of focusing.
  • The electronic device determining the user's attention point based on the target image specifically includes: the electronic device determining, based on the target image, the position of the user's attention point and the type of object at that position. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the attention point is at the intersection of multiple types of objects in the target picture, adjusting the focal length of the camera with the focus object of the same type as the previous focus object as the focus point, and capturing the first preview image through the camera. In this way, the object that the user pays attention to can be determined more accurately and focused on, thereby improving the accuracy of focusing.
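The type-based tie-break at object intersections can be sketched in a few lines. The candidate representation and function name are hypothetical; the rule itself (prefer the object whose type matches the previous focus object) is what the claim describes.

```python
def pick_focus_object(candidates, last_type):
    """Choose a focus object when the attention point sits at the
    intersection of several objects.

    candidates: list of (object_id, object_type) under the attention point.
    last_type:  type of the previous focus object.
    Prefers an object of the same type as the previous focus object;
    falls back to the first candidate when no type matches.
    """
    for obj_id, obj_type in candidates:
        if obj_type == last_type:
            return obj_id
    return candidates[0][0]
```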
  • the first preview image includes a focus frame, and the attention point is a center position of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
  • The first preview image includes a focus frame, and the center of the subject at the attention point is the center of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
  • The electronic device determining the user's attention point based on the target image involves the following quantities (the original formula images are not reproduced in this text): the modulus of the vector from the corneal center O_c to the attention point P_g; the eyeball structural parameters of the first person, including the eyeball center O_e, the Kappa angle (the angle between the visual axis and the optical axis) with horizontal component α and vertical component β, the rotation parameter R, the translation parameter t, the coordinates of the head coordinate system, and V_s, the unit normal vector of the plane in which the display screen of the electronic device lies; the deflection angles of the optical axis unit vector V_o; and the visual axis unit vector V_g.
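For reference, a standard 3D gaze-estimation model consistent with the quantities above can be written as follows. This is a hedged reconstruction of textbook relations between the optical axis, the visual axis, and the on-screen gaze point; the patent's own formula images do not survive extraction, so the exact expressions here are assumptions.

```latex
% Optical-axis unit vector from its deflection angles (yaw \theta, pitch \varphi):
V_o = \begin{pmatrix} \cos\varphi \sin\theta \\ \sin\varphi \\ -\cos\varphi \cos\theta \end{pmatrix}
% Visual-axis unit vector, offsetting the deflection angles by the
% Kappa-angle components (\alpha horizontal, \beta vertical):
V_g = \begin{pmatrix} \cos(\varphi+\beta)\sin(\theta+\alpha) \\ \sin(\varphi+\beta) \\ -\cos(\varphi+\beta)\cos(\theta+\alpha) \end{pmatrix}
% Attention point: extend the visual axis from the corneal center O_c by
% \lambda = \lVert P_g - O_c \rVert, with \lambda fixed by the screen-plane
% constraint (unit normal V_s, any point s_0 on the screen):
P_g = O_c + \lambda V_g, \qquad V_s \cdot (P_g - s_0) = 0
```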
  • An embodiment of the present application further discloses an electronic device, including a processor, a camera, and a touch screen.
  • The processor is configured to, in response to a first operation by the user, instruct the camera to start shooting, instruct the touch screen to display a first interface, and display on the first interface a preview image captured by the camera;
  • The processor is further configured to instruct the touch screen to display a first preview image on the first interface, where the first preview image is a preview image captured by the camera with an attention point that meets the preset condition as the focus point, and the attention point is the position on the first interface where the user's line of sight falls.
  • For example, the first operation may be an operation in which the user taps the camera application on the screen to start shooting.
  • In this way, when the user holds the electronic device to take pictures, the electronic device can automatically focus based on the acquired position of the user's attention point in the preview image. This focuses accurately on the user's attention point, is easy to operate, and can improve user experience.
  • The processor instructing the touch screen to display a first preview image on the first interface specifically includes: the processor is configured to acquire a target image through a front camera, where the target image includes an image of the user's eyes; the processor is further configured to determine the user's attention point based on the target image; when the user's attention point satisfies the preset condition, adjust the focal length of the camera with the user's attention point as the focus point and capture the first preview image through the camera; and display the first preview image on the first interface.
  • In this way, the electronic device obtains the user's target image in real time through the front camera and determines the position of the attention point, providing a stable source of focus points. The electronic device can focus continuously, thereby continuously obtaining clear images of the user's region of interest, and user experience can be improved.
  • The processor is further configured to detect a second operation for enabling the attention-point focusing function; the processor is further configured to, in response to the second operation, instruct the touch screen to display the first preview image on the first interface.
  • For example, the second operation may be a trigger operation to enable the attention-focus function, or a voice command to enable the attention-focus function.
  • The processor determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, determining the duration for which the attention point is detected. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the duration of the detected attention point is not less than a first duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera.
  • In this way, the attention point can be focused on only after the user's gaze has stabilized, thereby improving the stability of the preview image.
  • Here, the duration of the detected attention point may be the duration of the user's gaze at a certain attention point.
  • The processor determining the user's attention point based on the target image specifically includes: when the user's attention point is detected based on the target image, determining the duration for which the attention point is detected and the intermittent time for which no attention point is detected. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the duration of the detected attention point is not less than a second duration threshold and the intermittent time for which no attention point is detected is less than a third duration threshold, using the user's attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera. In this way, when the user's line of sight is momentarily unstable, for example when the user glances elsewhere and then looks back, the focus point of the preview image can remain unchanged, thereby improving the stability of the image and improving user experience.
  • Here, the duration of the detected attention point may be the duration of the user's gaze at the first attention point, and the intermittent time for which no attention point is detected may be the duration for which the user's line of sight leaves the first attention point.
  • The processor determining the user's attention point based on the target image specifically includes: determining the user's attention point based on the target image and detecting a user action. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the detected user action in the target image is a set action, adjusting the focal length of the camera with the user's current attention point as the focus point and capturing the first preview image through the camera, where the set action includes one or more of pupil dilation, a "nod", and an "OK" gesture.
  • In this way, the electronic device can sense a user action indicating that the user is paying close attention to the current attention point, so the captured image better matches the user's intention without requiring manual or other operations by the user, which improves user experience.
  • the target image may include the user's eyes, may also include the user's face, and may also include the user's body parts.
  • The processor determining the user's attention point based on the target image specifically includes: at a first moment, determining, based on the target image, a first duration for which the user is detected at a first attention point. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when it is detected that the first duration is greater than a fourth duration threshold, using the first attention point as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera.
  • The processor is further configured to determine, based on the target image, a second duration for which the user is detected at a second attention point at a second moment, where the first attention point and the second attention point are at different positions and the second moment is after the first moment; when it is detected that the second duration is greater than the fourth duration threshold, the second attention point is used as the focus point, the focal length of the camera is adjusted again, and a second preview image is captured through the camera.
  • In this way, when the user's attention shifts from one position to an adjacent position, each refocusing requires only a small motor adjustment, achieving smooth focusing, improving the imaging quality of the image, and reducing the resource consumption of the electronic device.
  • Here, the first duration may be the duration of the user's gaze at the first attention point, and the second duration may be the duration of the user's gaze at the second attention point.
  • The processor determining the user's attention point based on the target image specifically includes: determining the position of the user's attention point based on the target image. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the position of the user's attention point is at the intersection of multiple objects in the target picture, the processor matches the objects in the target picture based on the user's voice information; when the processor matches a first focus object, using the first focus object as the focus point, adjusting the focal length of the camera, and capturing the first preview image through the camera, where the first focus object is a scene or a person among the multiple objects in the target picture. In this way, the object that the user pays attention to can be determined more accurately and focused on, thereby improving the accuracy of focusing.
  • The processor determining the user's attention point based on the target image specifically includes: determining, based on the target image, the position of the user's attention point and the type of object at that position. When the user's attention point satisfies the preset condition, adjusting the focal length of the camera with the user's attention point as the focus point and capturing the first preview image through the camera specifically includes: when the attention point is at the intersection of multiple types of objects in the target picture, adjusting the focal length of the camera with the focus object of the same type as the previous focus object as the focus point, and capturing the first preview image through the camera. In this way, the object that the user pays attention to can be determined more accurately and focused on, thereby improving the accuracy of focusing.
  • the first preview image includes a focus frame, and the attention point is a center position of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
  • The first preview image includes a focus frame, and the center of the subject at the attention point is the center of the focus frame. In this way, the accuracy of the focus position in the preview image can be guaranteed.
  • The electronic device determining the user's attention point based on the target image involves the following quantities (the original formula images are not reproduced in this text): the modulus of the vector from the corneal center O_c to the attention point P_g; the eyeball structural parameters of the first person, including the eyeball center O_e, the Kappa angle (the angle between the visual axis and the optical axis) with horizontal component α and vertical component β, the rotation parameter R, the translation parameter t, the coordinates of the head coordinate system, and V_s, the unit normal vector of the plane in which the display screen of the electronic device lies; the optical axis unit vector V_o; and the visual axis unit vector V_g.
  • the present application provides an electronic device, including a touch screen, a camera, one or more processors, and one or more memories.
  • The one or more processors are coupled with the touch screen, the camera, and the one or more memories; the one or more memories are used to store computer program code, and the computer program code includes computer instructions.
  • The present application further provides an electronic device including one or more functional modules, and the one or more functional modules are used to execute the focusing method in any possible implementation of any of the above aspects.
  • An embodiment of the present application provides a computer storage medium including computer instructions; when the computer instructions run on an electronic device, the electronic device is caused to execute the focusing method in any possible implementation of any of the above aspects.
  • an embodiment of the present application provides a computer program product, which, when the computer program product is run on a computer, causes the computer to execute the focusing method in any possible implementation manner of any one of the above aspects.
  • FIG. 1A is a schematic cross-sectional view of an eyeball structure model provided in an embodiment of the present application
  • FIG. 1B is a schematic diagram of an application scenario of a focusing method provided by an embodiment of the present application
  • FIG. 9 is a schematic flowchart of a focusing method provided by an embodiment of the present application.
  • Fig. 10 is a schematic diagram of a 3D eye model provided by an embodiment of the present application.
  • Fig. 11 is a schematic diagram of conversion from a head coordinate system to a world coordinate system provided by an embodiment of the present application.
  • Fig. 12 is a schematic diagram of the relationship between the visual axis and the optical axis provided by the embodiment of the present application;
  • FIGS. 13A-13C are schematic diagrams of some user interfaces provided by the embodiments of the present application.
  • FIG. 14 is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present application.
  • Focusing: the process of changing the distance between the lens and the imaging surface (the image sensor) through the camera's focusing mechanism so that the subject's image becomes clear is called focusing.
  • Autofocus on a mobile phone uses the principle of light reflected from objects: the reflected light is received by the image sensor (CCD or CMOS) of the camera in the phone to obtain a raw image, the raw image is processed by computation, and an electrically driven focusing mechanism is adjusted accordingly. This way of focusing is called autofocus.
  • Essentially, autofocus is a set of computational methods integrated in the phone's ISP (Image Signal Processor). After the viewfinder captures the raw image, the image data is sent to the ISP as raw data. The ISP analyzes the image data, determines the distance by which the lens needs to be adjusted, and then drives the voice coil motor to make the adjustment so that the image becomes clear. From the point of view of the phone user, this is the autofocus process.
  • The lens is held in the voice coil motor, and the position of the lens can be changed by driving the voice coil motor.
  • There are three common ways to realize autofocus on a mobile phone: phase focusing, contrast focusing, and laser focusing. The three ways are introduced below.
  • Phase focusing reserves some shielded pixels on the photosensitive element, used specifically for phase detection. The offset of the lens relative to the focal plane is determined from the distances between these pixels and their changes, and the lens position is then adjusted according to the offset to achieve focus.
  • The principle of phase focusing is to place phase-difference detection pixels on the photosensitive element (such as an image sensor). Each phase-difference detection pixel is masked over its left half or right half, and can detect the amount of light and other information from objects in the scene. The phase difference is the difference in phase between the optical signals received by the left-masked and right-masked pixels.
  • The electronic device computes correlation values from the images obtained from the left-side and right-side phase-difference detection pixels and derives a focusing function, so that the phase difference and the offset are in a one-to-one relationship.
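As an illustration of how a phase difference maps to an offset, the following sketch estimates the shift between left-masked and right-masked pixel signals by minimising a sum of absolute differences over candidate shifts. This is a generic correlation technique, not the patent's specific focusing function.

```python
def phase_offset(left, right, max_shift):
    """Estimate the phase difference between left- and right-masked
    pixel signals.

    Slides one signal against the other over shifts in
    [-max_shift, max_shift] and returns the shift with the smallest
    mean absolute difference; in a phase-detect system this shift
    maps one-to-one to the lens defocus.
    """
    best_shift, best_cost = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                cost += abs(left[i] - right[j])
                count += 1
        if count == 0:
            continue
        cost /= count
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift
```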
  • Contrast focusing assumes that when focus is achieved, the contrast between adjacent pixels is greatest. Based on this assumption, a focus point is determined during focusing, the contrast between the focus point and adjacent pixels is evaluated, and the voice coil motor is moved repeatedly until a local gradient maximum is reached, at which point focusing is complete.
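The repeated motor movement toward a local contrast maximum is essentially a hill climb over lens positions. The following sketch assumes a `sharpness_at` callback that returns a contrast metric (for example, gradient energy around the focus point); the metric and the stopping rule are illustrative, not the patent's algorithm.

```python
def contrast_autofocus(sharpness_at, positions):
    """Hill-climb over motor positions to a local contrast maximum.

    sharpness_at(pos) returns a contrast/sharpness score at a lens
    position. Steps through positions while the score keeps rising and
    stops at the first drop, i.e. a local maximum, which contrast
    focusing treats as the in-focus position.
    """
    best = positions[0]
    best_score = sharpness_at(best)
    for pos in positions[1:]:
        score = sharpness_at(pos)
        if score <= best_score:
            break                    # passed the peak: stop searching
        best, best_score = pos, score
    return best
```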
  • For laser focusing, the electronic device emits an infrared laser toward the subject to be photographed (the focusing subject) through an infrared laser sensor. When the laser reaches the focusing subject, it returns along its original path. The electronic device calculates the distance from the device to the focusing subject from the round-trip time of the infrared laser, and then drives the voice coil motor to adjust the position of the lens based on that distance.
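The time-of-flight distance computation is straightforward: the laser travels to the subject and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def laser_distance_m(round_trip_s):
    """Distance to the focusing subject from the laser's round-trip time.

    The pulse covers the device-to-subject distance twice (out and
    back), hence the division by two.
    """
    return C * round_trip_s / 2.0
```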
  • The voice coil motor mainly consists of a coil, a magnet group, and spring plates. The coil is held within the magnet group by the upper and lower spring plates. When the coil is energized, it generates a magnetic field; this field interacts with the magnet group, moving the coil upward, and the lens held in the coil moves with it. When the power is cut off, the coil returns under the elastic force of the spring plates, thus realizing the autofocus function.
  • Figure 1A is a schematic cross-sectional view of an eyeball structure model provided in the embodiment of the present application.
  • The eyeball includes a cornea 1, an iris 2, a pupil 3, a lens 4, a retina 5, a corneal center 6, and an eyeball center 7, wherein:
  • the cornea 1 is the transparent part of the front of the eyeball, and is the first pass through which light enters the eyeball.
  • The central area of the outer surface of the cornea 1, about 3 mm across, approximates a spherical arc and is called the optical zone; the radius of curvature gradually increases toward the periphery, giving the cornea an aspherical shape.
  • the cornea 1 is assumed to be a spherical arc surface.
  • The iris 2 is a disc-shaped membrane with a hole in its center called the pupil 3. If the light is too strong, the sphincter muscle in the iris 2 contracts and the pupil 3 shrinks; when the light becomes weak, the dilator muscle of the iris 2 contracts and the pupil 3 enlarges.
  • the pupil 3 is the small circular hole in the center of the iris in the animal or human eye, which is the passage for light to enter the eye.
  • the retina 5 is the photosensitive part of the eyeball, and external objects are imaged on the retina 5.
  • FIG. 1B shows a schematic diagram of an application scenario involved in a focusing method provided by an embodiment of the present application.
  • the user opens the camera application program of the electronic device 100, uses the camera to capture the scene 20, and the electronic device 100 displays the interface 100 as shown in FIG. 1B.
  • the interface 100 may include a preview screen 101 , a menu bar 102 , an album 103 , a shooting control 104 , and a switching camera control 105 .
  • the menu bar 102 may include options such as aperture, night scene, portrait, photo, video, professional, and more, and the user may select a photo mode according to his or her needs.
  • the car 106 can be selected as the focus point.
• after the electronic device receives the user's operation of selecting the focus point, in response to the operation, the focusing system performs focusing based on the focus point selected in the preview screen 101, so that the image in the region of interest is clearer.
  • Determining the focus point is an important step in the focusing process of the electronic device. Accurate position information of the focus point can improve the quality of captured images and meet the needs of users. In the prior art, the determination of the focus point and focusing based on the focus point can be performed in the following ways:
  • the operation of selecting the focus point may be a touch operation.
• the user is interested in a scene in the preview screen 101 (such as the car 106 in the figure); at this time, the car 107 can be selected as the focus point, and the user can touch the car 107 in the preview screen 101.
  • the electronic device 100 receives the touch operation, and in response to the operation, obtains the position information of the car 107 in the preview screen. Further, the electronic device 100 focuses based on the location information of the car 107 in the preview picture, and obtains a clear preview picture of the area where the car 107 is located.
• the electronic device can also display an interface 200 as shown in FIG. 2B.
• the interface 200 includes a preview screen 202, and the preview screen 202 can display a region-of-interest frame 201.
• the region-of-interest frame 201 can be determined according to the position of the focus point (the car 106) and can be used to indicate the area where the subject of interest to the user is located. It can be the dotted frame 201 shown in FIG. 2B, or another shape such as a rectangular frame, a circular frame, or a triangular frame.
• the electronic device 100 may identify a specific subject according to acquired features of the specific subject in the preview screen, use the position of the specific subject in the preview screen as the focus point position, and then focus based on the position of the focus point. For example, in face focusing, the electronic device 100 can recognize a face in the preview screen, take the position of the face in the preview screen as the position of the focus point, and then focus based on that position to obtain a clear image of the area where the face is located in the preview screen.
• however, the subjects different users are interested in vary widely, and the types of subjects that the electronic device 100 can identify are limited.
• the subject recognized by the electronic device 100 in the preview screen may not be the subject the user is interested in. Therefore, this method may fail to determine the focus point, and thus cannot meet the user's requirement for the captured image, namely that the image of the area where the subject of interest is located be clear.
• an embodiment of the present application provides a focusing method, which includes: an electronic device (such as a mobile phone, a tablet computer, etc.) displays a preview screen through a display screen, where the preview screen may be an image collected by the electronic device through the rear camera or the front camera; the electronic device can detect a first operation of the user; in response to the first operation, the electronic device obtains a target image of the user through the front camera; based on the target image, the electronic device determines the position of the user's attention point on the preview screen, where the area in which the attention point is located is the user's region of interest; the electronic device then focuses the preview screen based on the position of the attention point.
  • the electronic device can also focus based on the acquired position of the user's attention point in the preview image, which is easy to operate.
  • the electronic device obtains the user's target image in real time through the front camera and determines the position of the attention point, providing a stable focus point input source, and the electronic device can continuously focus, thereby continuously obtaining clear images of the user's interest area.
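The loop described above (capture a target image through the front camera, estimate the attention point, focus) can be sketched as follows; every callable here is a hypothetical stand-in for the device's camera and gaze-estimation components, not an interface defined by this application:

```python
# Minimal sketch of the continuous attention-point focusing loop.
# All callables are hypothetical stand-ins for the real camera stack.

from typing import Callable, Optional, Tuple

Point = Tuple[int, int]  # attention point in display-screen coordinates

def focus_loop(capture_front_image: Callable[[], object],
               estimate_attention_point: Callable[[object], Optional[Point]],
               focus_at: Callable[[Point], None],
               frames: int) -> int:
    """Run the attention-point focusing loop for a number of frames.

    Returns how many frames yielded a usable attention point."""
    focused = 0
    for _ in range(frames):
        target_image = capture_front_image()            # front-camera frame of the user
        point = estimate_attention_point(target_image)  # gaze point on the preview screen
        if point is not None:                           # the gaze may fall off-screen
            focus_at(point)                             # focus based on the attention point
            focused += 1
    return focused
```

Because the front camera supplies target images continuously, the same loop supports the continuous focusing behavior described above.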
• the second operation is used to trigger the attention point focusing function.
  • the target image is an image collected by the electronic device through the front camera, and may be an image including the user's face and eyes.
  • a focusing method provided by the embodiment of the present application can be applied in the shooting scene shown in FIG.
• the following describes user interfaces involved in a focusing method provided in an embodiment of the present application; the method may be executed by the electronic device 100 in FIG. 1B.
  • the electronic device includes a front camera 30 shown in FIG. 3 and a rear camera (not shown in FIG. 3 ).
  • the user turns on his/her electronic device, so that the display screen of the electronic device displays the desktop of the electronic device.
• FIG. 3 is a schematic diagram of an interface of a user's electronic device provided in an embodiment of the present application. The interface may include a status bar 31 and a menu bar 32.
• the status bar 31 includes the operator, the current time, the current geographic location and local weather, the network status, the signal status, and the battery level.
• the menu bar 32 includes icons of at least one application program, each with the corresponding application name below the icon, for example: camera 110, mailbox 115, cloud sharing 116, memo 117, settings 118, gallery 119, phone 120, short message 121, and browser 122. The positions of the application icons and their names can be adjusted according to the user's preference, which is not limited in this embodiment of the present application.
  • FIG. 3 is an exemplary display of the embodiment of the present application, and the schematic diagram of the interface of the electronic device may also be in other styles, which is not limited in the embodiment of the present application.
• the user can input an operation on the camera 110 in the menu bar 32; the electronic device receives the operation and, in response, displays the interface shown in FIG. 4A.
• the interface is a schematic interface diagram of a camera of an electronic device provided in an embodiment of the present application.
• the interface includes a camera menu bar 41, a preview screen 42, an attention point focus control 401, an album 40A, a shooting control 40B, a switching camera control 40C, a smart vision switch 40D, an artificial intelligence (AI) shooting switch 40E, a flash switch 40F, a filter switch 40G, and a setting control 40H, wherein:
• the camera menu bar 41 can include options for multiple camera modes such as aperture, night scene, portrait, photographing, video recording, professional, and more; different camera modes realize different shooting functions.
• the camera mode that the "triangle" in the camera menu bar 41 points to indicates the initial camera mode or the camera mode selected by the user. As shown at 402 in FIG. 4A, the "triangle" points to "photographing", indicating that the camera is currently in the photographing mode.
  • the preview image 42 is an image collected by the electronic device in real time through the front camera or the rear camera.
  • the attention point focusing control 401 can be used to trigger focusing based on the attention point.
  • the attention point is the position where the user's eye sight falls on the display screen of the electronic device, that is, the position where the user's eye sight falls on the preview image 42 .
  • the photo album 40A is used for the user to view the pictures and videos that have been taken.
  • the shooting control 40B is configured to make the electronic device take pictures or videos in response to user operations.
  • the switching camera control 40C is used to switch the camera for capturing images between the front camera and the rear camera.
  • the smart vision switch 40D is used to turn on or off the smart vision.
  • the smart vision can be used for object recognition, shopping, translation, and code scanning.
  • An artificial intelligence (AI) shooting switch 40E is used to turn on or off the AI shooting.
  • the flash switch 40F is used to turn on or turn off the flash.
  • the filter switch 40G is used to turn on or off the filter.
  • the setting control 40H is used to set various parameters when collecting images.
  • the operation to trigger focusing based on the attention point may be a touch (click) operation input on the attention point focus control 401 as shown in FIG. 4A .
• the user can input an operation on the camera 110 in the menu bar 32; the electronic device receives the operation and, in response, displays the interface shown in FIG. 4B.
• the interface is a schematic interface diagram of a camera of another electronic device provided in an embodiment of the present application.
• the interface includes a camera menu bar 43, a preview screen 44, an album 40A, a shooting control 40B, a switching camera control 40C, a smart vision switch 40D, an artificial intelligence (AI) shooting switch 40E, a flash switch 40F, a filter switch 40G, and a setting control 40H.
• the "triangle" points to "photographing", indicating that the camera is currently in the photographing mode.
  • the user wishes to select "More” in the camera menu bar 43.
• the operation of selecting "More" in the camera menu bar 43 may be a sliding operation input on the camera menu bar 43; specifically, the sliding operation may be to slide in the direction indicated by the arrow on the camera menu bar 43 in FIG. 4B.
  • the above-mentioned operation of selecting "more” in the camera menu bar 43 may also be a touch operation.
• the electronic device displays an interface as shown in FIG. 4C, which includes a camera menu bar 45 and a more menu bar 46, wherein:
• the "triangle" in the camera menu bar 45 points to "more", and the more menu bar 46 can include a short video control, a professional video control, a skin beautification control, an attention point focus control 405, a 3D dynamic panorama control, a panorama control, an HDR control, a super night scene control, a time-lapse photography control, etc.
  • the operation to trigger focusing based on the attention point may be a touch (click) operation input on the attention point focus control 405 as shown in FIG. 4C .
• the operation to trigger focusing based on the attention point may be a touch operation input on the camera 110 in the menu bar 32; that is, after the camera is turned on, the electronic device starts the process of focusing based on the attention point.
  • the operation of triggering focusing based on the attention point is not limited to the methods provided in the above embodiments, and may also be operations such as voice control.
• taking voice control as an example, after the user turns on the camera, the electronic device collects sound through the microphone and can recognize whether the sound includes voice control information such as "turn on attention point focusing" or "focus"; if so, the electronic device triggers focusing based on the attention point.
  • FIG. 4A-FIG. 4C are exemplary representations of the embodiment of the present application, and the schematic interface diagrams of the electronic device may also be in other styles, which are not limited in the embodiment of the present application.
• when the electronic device detects an operation that triggers focusing based on the attention point, in response to the operation, the electronic device acquires the target image of the user through the front camera to determine the position where the user's gaze falls on the preview screen displayed by the electronic device, that is, the position information of the user's attention point.
  • the location information of the attention point may include the coordinates of the attention point on the plane coordinate system (ie, the display screen coordinate system) where the preview image is located.
  • the process for the electronic device to determine the location information of the user's attention point may refer to the relevant description of S901-S907 in FIG. 9 below.
• when the electronic device detects that the user has input an operation on the attention point focus control 401 in the interface shown in FIG. 4A, in response to the operation, it may display the interface shown in FIG. 4D.
• the attention point focus control 401 changes from a first color (for example, gray) to a second color (for example, black), indicating that the attention-point-based focusing function has been turned on.
  • the display form of the attention-point focusing control indicating to enable or disable the attention-point-based focusing function is not limited to a color change, and may also be a display form of different transparency.
• the electronic device displays an interface as shown in FIG.
• the interface includes prompt information, such as "please look at the dot below", for prompting the user to look at the calibration point 408 in the interface.
• the calibration point 408 is used by the electronic device to determine the user's face model and eyeball structure parameters; for the specific process by which the electronic device determines the user's face model and eyeball structure parameters, refer to the related description of S903 in FIG. 9 below.
• when the electronic device detects that the user has input an operation on the attention point focus control 405 in the interface shown in FIG. 4C, in response to the operation, it may also display the interface shown in FIG. 4E.
  • the interface includes an icon 406 for indicating that the focusing function based on attention points has been enabled, and a control 407 for disabling the focusing function based on attention points.
  • the user may input an operation on the control 407 to close the function.
• the electronic device displays an interface as shown in FIG. in response to the above-mentioned operation input on the attention point focus control.
• the interface includes prompt information, such as "please look at the dot below", for prompting the user to look at the calibration point 409 in the interface.
• the calibration point 409 is used by the electronic device to determine the user's face model and eyeball structure parameters; for the specific process by which the electronic device determines the user's face model and eyeball structure parameters, refer to the related description of S903 in FIG. 9 below.
  • the preview image displayed by the electronic device may be an image collected by a rear camera, or an image collected by a front camera.
• for example, the user turns on the camera and uses the front camera of the electronic device to take a selfie, and the electronic device displays the preview screen captured by the front camera; in response to an operation that triggers focusing based on the attention point, the electronic device determines, from the obtained target image of the user, the position information of the attention point on the display screen, and then focuses according to the position information of the attention point.
• the target image of the user obtained by the electronic device may be obtained from the preview screen, or directly from the image collected by the front camera (that is, the image collected by the front camera is split into two paths: one path is displayed on the display screen as the preview screen, and the other path is used to determine the position information of the user's attention point).
  • the electronic device performs focusing based on the determined location information of the user's attention point.
• the focusing process for the attention point can be triggered through the following implementations. It should be noted that these implementations are described by taking as an example a preview screen that is an image collected by the rear camera of the electronic device.
• Implementation mode 1: When the user's gaze duration on an attention point satisfies a preset condition, the electronic device triggers a focusing process on the attention point.
  • Gaze duration is the period of time between when the user's gaze falls on the attention point on the display screen and leaves the attention point.
• the electronic device focuses the camera that captures the preview screen.
  • the electronic device places the focusing frame based on the position information of the attention point, and then executes the focusing process according to the phase difference data in the focusing frame, where the phase difference data is the phase difference in the focusing frame.
• the focusing process includes: the electronic device determines the distance and direction by which the lens is to be adjusted according to the phase difference data in the focusing frame, and drives the voice coil motor to adjust the position of the lens in the camera, that is, changes the distance (image distance) between the lens and the image sensor, so that the image of the area where the user's attention point is located is clear.
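A minimal sketch of mapping the phase difference in the focusing frame to a signed lens move is shown below; the linear gain is an assumed placeholder for the camera module's real calibration data, and the function names are illustrative:

```python
# Sketch of phase-detection focusing: the phase difference in the focusing
# frame is mapped to a signed lens displacement (sign = direction, magnitude
# = distance). The linear gain stands in for real module calibration.

def lens_adjustment_um(phase_diff: float, gain_um_per_unit: float = 12.0) -> float:
    """Signed lens move in micrometres for a given phase difference."""
    return gain_um_per_unit * phase_diff

def drive_voice_coil(current_position_um: float, phase_diff: float) -> float:
    """Return the new lens position after one focusing step."""
    return current_position_um + lens_adjustment_um(phase_diff)

if __name__ == "__main__":
    pos = 100.0
    pos = drive_voice_coil(pos, phase_diff=-2.5)  # negative: move toward the sensor
    print(pos)  # 70.0
```

A real module would iterate such steps until the measured phase difference in the focusing frame approaches zero.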
  • the electronic device determines the position information of the attention point where the user's line of sight falls on the preview screen at each moment from the user's target image acquired in real time through the front camera, and records the user's gaze duration for each attention point.
  • the preset condition is that the fixation duration for the attention point is not less than the first duration threshold.
• when the preset condition is satisfied, the electronic device performs a focusing process; for the focusing process, refer to S906- in FIG. 9 below.
  • the electronic device performs focusing based on the position information of the first attention point.
• the electronic device starts timing when it detects that the user's line of sight falls on the first attention point, and continues until it determines that the user's attention point has changed, that is, the determined position information of the attention point is no longer the position information of the first attention point.
• at that point, the electronic device restarts timing.
  • the electronic device may display a timer, and the timer may be used to indicate the length of time the user's gaze falls on the attention point.
  • the preset condition for the electronic device to perform focusing is to detect that the duration of the user's gaze on the first attention point is not less than 3s.
• the electronic device determines the position information of the user's first attention point. As shown in FIG. 5A, if the user's first attention point is at position 501 on the preview screen, the electronic device displays a timer 502 showing the duration for which the user's line of sight has rested on position 501; as shown by the timer 502 in FIG. 5A, timing starts from 0.0s.
• when the electronic device detects that the value in the timer has changed to 3.0s, as shown at 503 in FIG. 5B, that is, the electronic device determines that the user's line of sight has rested on the first attention point for 3.0s, it executes the focusing process based on the position information of the first attention point.
  • the timer may also be a countdown timer.
• after the electronic device determines the position information of the user's first attention point, it displays a countdown timer 503 as shown in FIG. 5B, and the countdown timer starts from 3.0s.
• when the electronic device detects that the value of the countdown timer has changed to 0.0s, as shown at 502 in FIG. 5A, it executes the focusing process based on the position information of the first attention point.
• if the electronic device detects that the user's attention point changes from position 501 in FIG. 5A to position 504, it restarts timing and records the duration of the user's gaze on position 504.
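The timing behavior above (start a timer when the gaze lands on a point, restart when the point changes, trigger focusing once the first duration threshold is reached) can be sketched as follows; the class name and interface are illustrative, and the 3 s threshold mirrors the example:

```python
# Sketch of implementation mode 1: focusing fires only when the gaze stays
# on the same attention point for at least the first duration threshold.
# Timestamps are plain seconds supplied by the caller.

from typing import Optional, Tuple

Point = Tuple[int, int]

class GazeFocusTrigger:
    def __init__(self, duration_threshold_s: float = 3.0):
        self.threshold = duration_threshold_s
        self.point: Optional[Point] = None
        self.start_time: Optional[float] = None

    def update(self, point: Point, now_s: float) -> bool:
        """Feed the current attention point; True means trigger focusing."""
        if point != self.point:       # attention point changed: restart timing
            self.point = point
            self.start_time = now_s
            return False
        return now_s - self.start_time >= self.threshold

if __name__ == "__main__":
    trig = GazeFocusTrigger()
    print(trig.update((120, 340), 0.0))  # False: timer just started
    print(trig.update((120, 340), 2.0))  # False: only 2.0 s of gaze so far
    print(trig.update((120, 340), 3.0))  # True: 3.0 s threshold reached
```

Moving the gaze to a new point resets the timer, matching the restart behavior described above.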
• in some embodiments, the preset condition is: after the user's gaze duration on the first attention point is not less than a second duration threshold, and the duration for which the user's gaze leaves the first attention point is less than a third duration threshold, the electronic device continues to focus based on the position information of the first attention point. Specifically, at some moment after the user's gaze duration on the first attention point is not less than the second duration threshold, the electronic device detects that the user's gaze leaves the first attention point for a first duration and then returns to it.
• the electronic device determines whether the first duration is less than the third duration threshold. When the first duration is less than the third duration threshold, the electronic device does not change the focusing information based on the position information of the first attention point, that is, it does not perform the process of adjusting the voice coil motor for focusing. When the first duration is not less than the third duration threshold, and the first duration is not less than the second duration threshold, the electronic device performs a focusing process based on the position information of a second attention point, where the second attention point is the position where the user's gaze falls on the preview screen during the first duration.
• the user's attention point determined by the electronic device from the acquired target image may or may not be on the preview screen (the second attention point above is an example of one that is). When the electronic device detects that the user's attention point is not on the preview screen, it does not perform a focusing process based on the attention point.
• for example, the second duration threshold is set to 3s and the third duration threshold is set to 1s.
• the electronic device detects that the user's attention point is at position 501 in FIG. 5D; at a certain moment after the gaze duration on position 501 is not less than 3s (the duration shown by the timer 502 in FIG. 5D is 4.0s), the electronic device detects that the user's attention point changes to position 506 in FIG. 5D, and after gazing at position 506 for 0.5s, the user's attention point returns to position 501.
• the electronic device detects that the duration for which the user's attention point left position 501 (0.5s) is less than the third duration threshold (1s), and continues to focus based on the position information of position 501. It should be understood that after the electronic device detects that the user's attention point has been fixed at position 501 for not less than 3s, it places a focusing frame based on the position information of position 501 and focuses; when the duration away from position 501 is less than the third duration threshold, focusing is still performed based on the position information of position 501 without changing the focusing information, so the electronic device does not need to perform the process of adjusting the voice coil motor, thereby reducing resource consumption of the electronic device.
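The brief-glance tolerance above can be sketched as a small state machine; the 1 s threshold mirrors the example, and the class name and interface are illustrative assumptions:

```python
# Sketch of the glance-away tolerance: once focus is locked on a point, a
# brief glance away (shorter than the third duration threshold) does not
# change the focus, avoiding needless voice-coil-motor adjustments.

from typing import Optional, Tuple

Point = Tuple[int, int]

class StickyFocus:
    """Keep focus on a locked attention point through brief glances away."""

    def __init__(self, away_threshold_s: float = 1.0):
        self.away_threshold = away_threshold_s
        self.locked: Optional[Point] = None
        self.away_since: Optional[float] = None

    def update(self, point: Point, now_s: float) -> Point:
        """Return the attention point focusing should use for this frame."""
        if self.locked is None or point == self.locked:
            self.locked = point
            self.away_since = None           # gaze is (back) on the locked point
        else:
            if self.away_since is None:
                self.away_since = now_s      # gaze just left the locked point
            if now_s - self.away_since >= self.away_threshold:
                self.locked = point          # stayed away long enough: refocus
                self.away_since = None
        return self.locked

if __name__ == "__main__":
    f = StickyFocus()
    print(f.update((100, 100), 0.0))  # (100, 100): lock on the first point
    print(f.update((300, 200), 4.0))  # (100, 100): glance away, keep old focus
    print(f.update((100, 100), 4.5))  # (100, 100): returned within 1 s
```

Only when the gaze stays away for the full threshold does the focus move, which is what spares the voice coil motor extra travel.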
• in some embodiments, the preset condition is that the user's line of sight leaves the first attention point and falls on a second attention point, and the user's physiological characteristics change while gazing at the second attention point.
• otherwise, the electronic device continues to focus based on the position information of the first attention point.
  • the electronic device detects that the user's line of sight leaves the first attention point and falls on the second attention point.
• the electronic device determines, from the collected target image of the user, that the physiological characteristics of the user's eyes have changed, for example, that the pupils have dilated, and performs the process of focusing based on the position information of the second attention point.
• when the electronic device does not detect a change in the physiological characteristics of the user's eyes, it does not change the focusing information based on the position information of the first attention point, that is, it does not perform the process of adjusting the voice coil motor for focusing. It should be understood that, in some implementations, when the electronic device detects a change in the user's physiological characteristics, it may place a focusing frame based on the position information of the user's current attention point and focus, without judging whether the gaze duration satisfies the condition.
  • the third duration threshold is set to 3s.
• the electronic device detects that the user's attention point is at position 501 in FIG. 5E, and at a certain moment after the fixation duration on position 501 is not less than 3s (the duration shown by the timer 502 in FIG. 5E is 4.0s), the electronic device detects that the user's attention point changes to position 504 in FIG. 5E.
• by comparing the pupil size in the user's target image with the pupil size when the attention point was at position 501, the electronic device can determine that the pupils have dilated.
  • the electronic device places a focusing frame based on the position information of position 504 and performs focusing.
• the first duration threshold, the second duration threshold, and the third duration threshold may or may not be the same value; this embodiment of the present application does not limit the values of the duration thresholds.
• in some embodiments, the electronic device may perform smooth focusing as the user's attention point changes. Specifically, when the electronic device detects that the user's gaze duration on an attention point is not less than a fourth duration threshold, it places the focusing frame based on the position information of that attention point and then drives the voice coil motor to adjust the distance between the lens and the image sensor to focus. For example, when there are multiple scenes or persons in the preview screen, the user may be interested in each of them, and the user's attention will shift from one scene or person to an adjacent one.
• when the device detects that the user's attention point has been fixed on a first scene or person for not less than the fourth duration threshold, it focuses based on the position information of that attention point (the position information of the first scene or person); when it detects that the user's attention point has shifted to a second scene or person adjacent to the first, and the gaze duration on the second scene or person is not less than the fourth duration threshold, it focuses based on the position information of the second scene or person. When the user's attention shifts from one position to an adjacent position, the electronic device drives the voice coil motor to adjust the position of the lens, and the lens only needs to move a small distance.
  • the fourth duration threshold is 2s.
  • the user's attention is first on the character 601 in FIG. 6A , then shifts to the character 603 adjacent to the character 601 , and finally shifts from the character 603 to the adjacent character 605 .
  • the electronic device detects the change process of the user's attention point as shown in Figures 6B-6D.
  • the electronic device detects that the user's attention point is fixed on the character 601 for 3.0s, as shown by the timer 602 in Figure 6B.
  • the electronic device determines that the distance to adjust the lens is the first distance based on the position information of the person 601, and further drives the voice coil motor to adjust the first distance to move the lens to the first position.
  • the electronic device determines the distance to adjust the lens as the second distance based on the position information of the person 603, and further drives the voice coil motor to adjust the second distance to move the lens to the second position.
• when the electronic device detects that the user's attention point shifts from the person 603 to the adjacent person 605, a timer 606 as shown in FIG. 6D is displayed; the timer 606 shows a gaze duration of 3.0s, not less than the fourth duration threshold of 2s, so the electronic device determines, based on the position information of the person 605, that the lens should be adjusted by a third distance, and drives the voice coil motor to adjust by the third distance, moving the lens to a third position.
• since the person 603 is adjacent to the person 601, the second distance between the second position of the lens and the first position is relatively small; similarly, since the person 605 is adjacent to the person 603, the third distance between the third position of the lens and the second position is relatively small.
• when the user uses the continuous shooting mode or records a video, the electronic device detects the position of the user's attention point; when it detects that the attention point shifts to an adjacent scene or person across successive frames, the electronic device performs a smooth focusing process to obtain continuously shot photos or video that meet the user's needs. Each photo obtained by continuous shooting, or each frame of recorded video, is obtained by focusing based on its corresponding attention point.
  • the user can long press the shooting control 40B as shown in FIG. 6A to start the continuous shooting mode.
• in some embodiments, a step-by-step focusing method may be adopted. Specifically, the distance between the attention points of two or more successive frames during continuous shooting or video recording can be detected.
• the electronic device can then push the voice coil motor in increments of a first step size (for example, 30 μm) so that the lens reaches the corresponding position, realizing smooth focusing across multiple frames and improving the focusing effect.
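The step-by-step lens movement can be sketched as below, assuming the 30 μm example step; in a real module the step plan would come from calibration data, and the function name is illustrative:

```python
# Sketch of step-by-step focusing: instead of jumping the lens to the target
# position in one move, the voice coil motor advances in fixed increments
# (30 um in the example) so focus changes smoothly across frames.

from typing import Iterator

def step_positions(current_um: float, target_um: float,
                   step_um: float = 30.0) -> Iterator[float]:
    """Yield successive lens positions from current to target in fixed steps."""
    direction = 1.0 if target_um >= current_um else -1.0
    pos = current_um
    while abs(target_um - pos) > step_um:
        pos += direction * step_um
        yield pos
    yield target_um  # the final (possibly partial) step lands on the target

if __name__ == "__main__":
    print(list(step_positions(100.0, 190.0)))  # [130.0, 160.0, 190.0]
```

Driving the motor through these intermediate positions, one per frame, is what makes the focus transition appear smooth in continuous shooting or video.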
  • the electronic device after the electronic device detects the location of the user's attention point, it determines an ROI frame based on the location information of the attention point.
  • the size of the ROI frame can be preset, and after the location information of the user's attention point is determined, the ROI frame is displayed with the attention point as the geometric center of the ROI frame.
• Alternatively, the electronic device can intelligently detect the image around the attention point; for example, through face recognition it recognizes a face and determines a face frame, also called the region-of-interest frame, according to the size of the face. Exemplarily, as shown in FIG. 6E, FIG. 6F and FIG. 6G, the electronic device determines three ROI frames 607, 608 and 609 based on face recognition and the position information of the user's attention point at three respective moments.
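• The preset-size ROI frame centered on the attention point can be sketched as follows. The clamping behavior at the image border is an assumption added for the sketch (the text does not specify what happens when the attention point is near an edge), and the function name is hypothetical.

```python
def roi_from_attention_point(px: int, py: int, roi_w: int, roi_h: int,
                             img_w: int, img_h: int):
    """Return (x, y, w, h) of a preset-size ROI frame whose geometric center
    is the attention point (px, py), clamped so the frame stays inside the
    preview image of size img_w x img_h."""
    x = min(max(px - roi_w // 2, 0), img_w - roi_w)
    y = min(max(py - roi_h // 2, 0), img_h - roi_h)
    return x, y, roi_w, roi_h
```

For a face-based ROI frame, roi_w and roi_h would instead be taken from the detected face size, as in FIGS. 6E-6G.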
  • the electronic device may perform focusing in combination with the user's physiological feature information, where the physiological feature information may be the user's expression or action, and is used to trigger the focusing process of the electronic device.
• The electronic device can obtain the user's image through the front camera and determine whether the facial expression, head movement or body movement in the user's image matches a preset expression or action instruction. When they match, the electronic device determines the position of the focus frame based on the position information of the attention point and then performs the focusing process. For example, when the electronic device detects a confirmation action by the user such as nodding or an "OK" gesture, or detects that the user's pupils dilate, the electronic device focuses.
  • the electronic device can store preset facial expressions or action instructions.
• Implementation mode 3: The electronic device performs focusing in combination with the voice information of the user.
  • the electronic device may perform focusing in combination with voice information of the user. Specifically, the electronic device responds to the above operation of triggering focus based on the attention point, acquires the sound in the environment in real time through the microphone, and recognizes the voice information in the sound in the environment.
• When the electronic device determines the position information of the user's attention point and detects that the user's voice information includes information about a scene or person in the preview screen (that is, the first focus object), it identifies that scene or person (also referred to as the subject) in the preview image within the area in the preset range where the attention point is located; further, the electronic device places a focus frame based on the position information of the subject to perform focusing.
  • the area within the preset range where the attention point is located may be an area within a range of preset pixel height and preset pixel width centered on the attention point.
  • FIG. 7A is an interface of an electronic device provided by the embodiment of the present application.
• The microphone icon is used to turn the microphone on or off and indicates the microphone's state: 701 in FIG. 7A represents the microphone in the on state, and 702 in FIG. 7B indicates the microphone in the off state.
  • the electronic device determines the position of the user's attention point as position 703. During the period when the microphone is turned on, the electronic device detects that the sound collected by the microphone includes the voice information of "car".
• In the image within the area in the preset range around the user's attention point, the electronic device recognizes the car 704 and determines the focus point as the car 704 in FIG. 7A; further, it places a focus frame based on the position information of the car 704 for focusing.
  • the electronic device may display the ROI frame according to the size of the car 704, as shown in 705 in FIG. 7C.
• After determining the position information of the user's attention point, the electronic device identifies the subject (scene or person) at the location of the user's attention point (position A), and then focuses based on the position information of the subject at position A.
• When the electronic device detects that the user's attention shifts to another position (position B), it recognizes the scene or person within the preset range of position B; when it recognizes one of the same category as the subject at position A, it records it as the subject at position B and determines the position information of that subject. Further, the electronic device determines a focus frame based on the position information of the subject at position B, and then performs the focusing process.
• The same category refers to scenes or people with the same characteristics, such as human faces, buildings, roads, vehicles, plants, animals, etc. It should be understood that the position information of position A and the position information of the subject at position A may be the same, or there may be a certain deviation between them.
• The electronic device determines that the user's attention point is at position 801 in FIG. 8A, recognizes that the subject at position 801 is a human face 802, and focuses based on the position information of the face 802.
  • the electronic device detects that the user's attention is shifted to the position 803 in FIG. 8B , it identifies whether there are subjects of the same type as the face 802 around the position 803 .
  • the electronic device recognizes the human face 804 within the preset range of the position 803 , and further, focuses based on the position information of the human face 804 .
• When the electronic device detects the situation described in any one of the foregoing implementation manners 1 to 4, it executes the attention-point-based focusing process.
  • a focusing method provided by an embodiment of the present application will be described in detail below with reference to the accompanying drawings.
  • the method provided in the embodiment of the present application may be applied in a scene where the electronic device focuses as shown in FIG. 1B , and the method may be executed by the electronic device 100 in FIG. 1B .
• the method may include, but is not limited to, the steps shown in FIG. 9:
  • the electronic device receives a user's first operation for triggering an attention-based focusing function.
  • the first operation may be the operation described above to trigger focusing based on the point of attention.
  • the first operation reference may be made to the relevant description above, and details will not be repeated here.
  • the electronic device acquires the target image through the front camera in response to the above first operation.
  • the target image is an image collected by the electronic device through a front-facing camera, and may be an image including a user's face and eyes.
• In a possible implementation, after the electronic device displays the preview image, it receives the first operation and then performs S902.
  • the first operation is a touch operation input to the camera 110 in FIG. 3 above
• The electronic device receives the first operation and, in response to the operation, displays a preview image captured by the default camera and obtains the target image through the front camera.
  • the electronic device determines position information of the user's attention point on the display screen of the electronic device based on the target image.
• The electronic device processes the acquired target image through an image processing method to obtain the user's image parameters and eyeball structure parameters; further, the electronic device uses the obtained image parameters and eyeball structure parameters to determine the position information of the user's attention point on the display screen of the electronic device.
  • FIG. 10 is a schematic diagram of a 3D eye model provided by an embodiment of the present application.
  • the geometric center of the display screen of the electronic device is the origin of the world coordinate system
  • P i is the center of the iris
  • O c is the center of the cornea
  • O e is the center of the eyeball
  • V o is the unit vector of the optical axis
  • V g is the visual axis unit vector
  • the visual axis of the eye is defined as the line from the corneal center Oc to the attention point Pg on the plane where the display screen of the electronic device is located
  • the optical axis is the line between the eyeball center and the corneal center.
  • the attention point Pg can be expressed as:
• The eyeball center O e , the optical axis unit vector V o and the visual axis unit vector V g can be determined based on the user's target image; the scale factor c and the Kappa angle can then be obtained and substituted into formula (1) to obtain the position information of the attention point P g .
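• Formula (1) itself is not reproduced in this text (it appears as an image in the original publication). A plausible reconstruction, consistent with the definitions above, is the intersection of the visual axis with the screen plane; the assumptions that the display screen lies in the plane z = 0 of the world coordinate system and that the corneal center sits at a distance k from the eyeball center along the optical axis are ours, not stated in this text:

```latex
% Hedged reconstruction of formula (1); the plane z = 0 and the
% corneal-center offset k are assumptions, not given in this text.
P_g = O_c + c\,V_g, \qquad
O_c = O_e + k\,V_o, \qquad
c = -\frac{(O_c)_z}{(V_g)_z} \tag{1}
```

Here c is the (signed) distance from the corneal center O c along the visual axis V g to the plane of the display screen, so that P g lies on that plane.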
  • the following describes in detail how the electronic device determines the position information of the user's attention point on the display screen of the electronic device through the target image through steps S903-S905.
  • the electronic device determines image parameters and eyeball structure parameters of the user based on the target image.
• The image parameters include the conversion relationship (R, t) required to convert the user's eyeball center from the head coordinate system to the world coordinate system in which the electronic device is located, and the position information of the iris center P i . The above conversion relationship (R, t) is used to determine the position of the user's eyeball center in the world coordinate system; different head postures of the user correspond to different conversion relationships.
  • the conversion relationship may include the rotation parameter R and the translation parameter t.
• The rotation parameter R can be a rotation matrix, and the translation parameter t can be a translation matrix.
• The eyeball structure parameters include the coordinates of the eyeball center O e in the head coordinate system and the Kappa angle in FIG. 10; the Kappa angle is the angle between the visual axis and the optical axis, including a horizontal component α and a vertical component β.
• The electronic device can determine the eyeball structure parameters through calibration of the eyeball center and calibration of the Kappa angle.
• The process for the electronic device to calibrate the eyeball structure parameters may include: the electronic device establishes the user's head coordinate system, and when receiving an operation that triggers attention-point-based focusing, the electronic device responds to the operation by displaying the interface shown in FIG. 4G, instructing the user to gaze at a calibration point displayed on the display screen of the electronic device (such as 408 or 409 in FIG. 4G) for a preset duration (for example, 1 s). At this time, the electronic device can acquire the image of the user and calculate the user's eyeball structure parameters.
• The electronic device may also determine the eyeball structure parameters through multiple calibration points: for example, the electronic device sequentially displays multiple calibration points at different positions and instructs the user to gaze at each calibration point for a preset duration, thereby determining the user's eyeball structure parameters.
  • the electronic device determines the image parameters (R, t, Pi) based on the target image may include the following process:
  • the electronic device uses face recognition technology and eye recognition technology in image processing to respectively recognize the face and eyes in the target image.
  • the electronic device determines the conversion relationship (R, t) between the head coordinate system and the world coordinate system based on the face in the target image.
  • the electronic device includes a sensor that can be used to obtain the position information of the feature points of the face, for example, a Kinect sensor.
• The electronic device detects the position information of the feature points of the face at time t' through the sensor, and then, with reference to the position information of the corresponding feature points in the face model and the transformation of those feature points from the head coordinate system to the world coordinate system, determines the rotation matrix R (including the yaw, pitch and roll angles) and the translation matrix t, that is, the conversion relationship (R, t) between the head coordinate system and the world coordinate system.
  • the human face model is a reference model determined by keeping the human face facing the display screen of the electronic device for a period of time, and using images collected by the front camera.
• The face model can be built while the user gazes at the calibration point on the display screen of the electronic device (such as 408 or 409 in FIG. 4G), and this process can be executed at the same time as the above determination of the user's eyeball structure parameters.
  • the electronic device determines the coordinates of the iris center P i based on the image of the eye in the target image.
• The electronic device may determine the coordinates of the iris center using an image gradient method. Specifically, the electronic device may use the following formula to determine the coordinates of the iris center P i :
  • h' is the iris center, that is, P i , h is the potential iris center, d i is the displacement vector, g i is the gradient vector, and N is the number of pixels in the image.
  • the electronic device may scale the displacement vector di and the gradient vector gi into unit vectors to obtain equal weights for all pixels.
• The electronic device determines the gradient vector of each pixel x i in the eye image; determines the displacement vector between x i and a potential iris center h, where each pixel is a potential iris center (that is, the displacement vectors between the pixel x i and every pixel in the eye image are determined); computes the dot products of the gradient vector of pixel x i with all of its displacement vectors; determines the mean of these dot products for pixel x i ; and takes the pixel x max with the largest mean dot product as the iris center, whose coordinates are the coordinates of the iris center P i .
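• The gradient-based objective described above can be sketched as follows. This is a minimal illustration of a means-of-gradients iris/pupil localization scheme; following the note above, displacement and gradient vectors are scaled to unit vectors, and the squared dot product (an assumption of ours, commonly used so that inward- and outward-pointing gradients score alike) is averaged over all pixels for each candidate center.

```python
import numpy as np

def iris_center(eye):
    """Grayscale eye image -> (row, col) of the estimated iris center.
    Scores every pixel as a candidate center h by the mean squared dot
    product between unit displacement vectors (candidate -> pixel) and
    unit image-gradient vectors, and returns the argmax."""
    eye = np.asarray(eye, dtype=float)
    gy, gx = np.gradient(eye)                       # image gradients
    mag = np.hypot(gx, gy)
    nz = mag > 1e-9
    ux = np.where(nz, gx / np.where(nz, mag, 1.0), 0.0)  # unit gradients
    uy = np.where(nz, gy / np.where(nz, mag, 1.0), 0.0)
    h, w = eye.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best_score, best = -1.0, (0, 0)
    for cy in range(h):                              # each pixel is a
        for cx in range(w):                          # potential iris center
            dx, dy = xs - cx, ys - cy                # displacement vectors
            dmag = np.hypot(dx, dy)
            dnz = dmag > 0
            dot = np.where(dnz, (dx * ux + dy * uy) / np.where(dnz, dmag, 1.0), 0.0)
            score = np.mean(dot ** 2)                # mean over all N pixels
            if score > best_score:
                best_score, best = score, (cy, cx)
    return best
```

At the true center of a dark iris on a brighter sclera, boundary gradients are radial and align with the displacement vectors, maximizing the mean; the double loop is written for clarity, not speed.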
• Based on the image parameters (R, t, P i ) and the user's eyeball structure parameters, the electronic device determines the position information of the eyeball center O e , the optical axis unit vector V o and the visual axis unit vector V g .
• The head coordinate system is determined by the physiological structure of the human head, and the world coordinate system is determined by the electronic device; the eyeball center transforms between the two coordinate systems as follows:
  • the optical axis unit vector V o can be determined according to the following formula:
• where r e is the radius of the eyeball, usually between 11 and 13 mm, and V o is expressed as a trigonometric function of the deflection angles of the optical axis unit vector.
  • the optical axis unit vector V o is rotated by the Kappa angle to obtain the visual axis unit vector V g :
  • the electronic device determines the position information of the user's attention point P g on the display screen of the electronic device based on the determined position information of the eyeball center O e , the optical axis unit vector V o and the visual axis unit vector V g .
• The electronic device substitutes the determined O e , V o and V g into the above expression to obtain the scale factor, and further substitutes the determined O e , V o , V g and the scale factor into the above formula (1), obtaining the three-dimensional coordinates (in the world coordinate system) of the attention point P g of the user's line of sight on the display screen of the electronic device.
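• Steps S904-S905 can be sketched end-to-end as follows. Everything here is a hedged illustration, not the disclosed implementation: the screen is assumed to lie in the plane z = 0 of the world coordinate system, the corneal center is assumed to sit at a fixed distance k (about 5.3 mm, a typical anatomical value) from the eyeball center along the optical axis, the angle convention for the optical axis and the small-angle application of the Kappa components α and β are our choices, and all names are hypothetical.

```python
import numpy as np

def gaze_point(O_e_head, R, t, theta, phi, alpha, beta, k_mm=5.3):
    """Map the eyeball center from head to world coordinates, build the
    optical axis from its deflection angles (theta horizontal, phi vertical,
    in radians), rotate by the Kappa components (alpha, beta) to get the
    visual axis, and intersect it with the screen plane z = 0."""
    O_e = R @ np.asarray(O_e_head, float) + np.asarray(t, float)
    # Optical axis unit vector from deflection angles (one common convention,
    # pointing toward the screen, i.e. in the -z direction at theta=phi=0).
    V_o = np.array([np.cos(phi) * np.sin(theta),
                    np.sin(phi),
                    -np.cos(phi) * np.cos(theta)])
    # Visual axis: optical axis deflected by the Kappa angle components.
    V_g = np.array([np.cos(phi + beta) * np.sin(theta + alpha),
                    np.sin(phi + beta),
                    -np.cos(phi + beta) * np.cos(theta + alpha)])
    O_c = O_e + k_mm * V_o                 # corneal center on the optical axis
    c = -O_c[2] / V_g[2]                   # scale factor reaching plane z = 0
    return O_c + c * V_g                   # attention point P_g
```

With the head looking straight at the screen (all angles zero), the sketch places P g directly in front of the eye on the screen plane.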
  • the electronic device focuses based on the location information of the attention point.
  • the electronic device first determines the focus frame based on the position information of the attention point, and then uses the phase difference data in the focus frame to focus.
  • the electronic device determines the focus frame based on the position information of the attention point on the display screen.
• The electronic device can preset the size and number of the focus frames, each focus frame having the same size. After the position information of the attention point is determined, the positions of the multiple focus frames can be determined based on the attention point; further, each focus frame is placed at its corresponding position. Optionally, the electronic device may display each focus frame.
  • the electronic device may determine the position of the focus frame centered on the determined attention point, or may determine the position of the focus frame centered on the position of the subject determined based on the attention point.
  • the electronic device may determine the position of the focusing frame centering on the determined attention point.
• The electronic device detects that the user's gaze duration on the attention point satisfies the preset condition described in Implementation mode 1 above, and determines the position of the focus frame with the attention point as the center.
  • the preset conditions described in Implementation Mode 1 refer to the description above, which will not be repeated here.
• The electronic device may first determine whether the user's physiological feature information matches the preset expression or action instruction; when they match, the electronic device takes the attention point as the center to determine the placement position of the focus frame.
  • the electronic device may preset five focusing frames 1301 as shown in FIG. 13A .
  • the position of the attention point determined by the electronic device is position 1306.
  • the electronic device determines five focus frames 1301, 1302, 1303, 1304, 1305 and 1306.
• Implementation manner B: The electronic device determines the position of the focus frame centered on the position of the subject determined based on the attention point.
  • the electronic device may determine the location of the focusing frame in combination with the user's voice information. Specifically, the electronic device responds to the above operation of triggering focus based on the attention point, acquires the sound in the environment in real time through the microphone, and recognizes the voice information in the sound in the environment.
• When the electronic device determines the position information of the user's attention point and detects that the user's voice information includes information about a scene or person in the preview screen, it recognizes that scene or person in the preview screen within the area in the preset range where the attention point is located; furthermore, the electronic device determines the position of the focus frame with the position of the subject as the center.
  • the area within the preset range where the attention point is located may be an area within a range of preset pixel height and preset pixel width centered on the attention point.
• After the electronic device determines the position information of the user's attention point, it identifies the subject (scene or person) at the position of the user's attention point (position A), and then focuses based on the position information of the subject at position A.
• When the electronic device detects that the user's attention shifts to another position (position B), it recognizes the scene or person within the preset range of position B; when it recognizes one of the same category as the subject at position A, it records it as the subject at position B and determines the position information of that subject. Further, the electronic device determines the focus frame with the position of the subject as the center, and then executes the focusing process.
• The same category refers to scenes or people with the same characteristics, such as human faces, buildings, roads, vehicles, plants, animals, etc. It should be understood that the position information of position A and the position information of the subject at position A may be the same, or there may be a certain deviation between them.
  • FIGS. 8A-8B For an exemplary process description of the related electronic device focusing based on the last focused subject, reference may be made to the descriptions of FIGS. 8A-8B in the above-mentioned implementation mode 4, which will not be repeated here.
  • the electronic device may preset five focusing frames 1301 as shown in FIG. 13A .
  • the position of the subject determined by the electronic device based on the position of the attention point is position 1306.
• The electronic device centers on the position 1306 and, based on the attention point, determines the five focus frames 1301, 1302, 1303, 1304, 1305 and 1306 as shown in FIG. 13C.
  • the number of preset focus frames may be 5 in the above example, or may be other, such as 9, which is not limited in this embodiment of the present application.
• The way the electronic device places the focus frames is not limited to the cross shape shown in FIG. 13C above; it may also be a square-ring shape, a nine-square grid, or the like.
  • the embodiment of the present application does not limit the placement method and position of the focusing frame.
• The position and size of the focus frame can also be determined based on the region-of-interest frame determined from the position information of the attention point in FIGS. 6E-6G, 7C, 8A and 8B above; for example, the size of the focus frame is that of the ROI frame, and the focus frame is placed at the center of the ROI frame.
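• The cross-shaped placement of equal-size focus frames around the center point can be sketched as follows. The edge-to-edge gap equal to the frame size is an assumption made so that the five frames touch without overlapping; the function name and coordinate convention (top-left x, y) are hypothetical.

```python
def place_focus_frames(cx, cy, size, gap=None):
    """Return (x, y, w, h) for five equal-size focus frames laid out in a
    cross: one centered on (cx, cy) plus one above, below, left and right."""
    if gap is None:
        gap = size  # adjacent frames touch edge-to-edge
    offsets = [(0, 0), (0, -gap), (0, gap), (-gap, 0), (gap, 0)]
    return [(cx + dx - size // 2, cy + dy - size // 2, size, size)
            for dx, dy in offsets]
```

A square-ring or nine-square-grid layout would only change the offsets list, which is why the placement pattern is easy to vary as noted above.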
  • the electronic device focuses using the phase difference data in the focusing frame.
• The electronic device uses phase focusing. Specifically, the electronic device obtains the phase difference data (such as the mean value of the phase differences) of the image in the focus frame through an image sensor with phase-difference detection pixels, and then searches the look-up table for the target offset corresponding to the phase difference data; further, the electronic device drives the voice coil motor to move by the target offset, adjusting the position of the lens to achieve focus.
  • the offset includes the distance and direction between the lens and the focal plane
  • the look-up table includes a plurality of phase difference data and their corresponding offsets.
• The look-up table can be obtained through calibration with a fixed chart: for the fixed chart, the lens is moved (that is, the offset is changed), the phase difference corresponding to each offset is calculated, and the results are recorded as the look-up table.
• The electronic device determines that the average of the phase differences in the multiple focus frames is the target phase difference data; further, the electronic device looks up the target offset corresponding to the target phase difference data and drives the voice coil motor to move by the target offset, adjusting the position of the lens to focus on the subject at the attention point.
• The target phase difference data may also be another value calculated based on the phase differences in the multiple focus frames, for example, the maximum of the phase differences in the multiple focus frames; this is not limited in this embodiment of the present application.
  • the target phase difference data may be the average value of the phase differences in the focus frames 1301 , 1302 , 1303 , 1304 , 1305 and 1306 .
  • the electronic device may also use contrast focusing, laser focusing, or combined focusing to focus.
• Combined focusing uses any two or all three of phase focusing, contrast focusing and laser focusing.
  • the focusing manner is not limited in this embodiment of the present application.
  • the exemplary electronic device 100 provided by the embodiment of the present application is introduced below.
  • FIG. 14 shows a schematic structural diagram of the electronic device 100 .
• The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
• The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
• The memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, they can be fetched directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves system efficiency.
  • processor 110 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
• The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 110 may include multiple sets of I2C buses.
• The processor 110 can be coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. While the charging management module 140 is charging the battery 142, it can also provide power for the electronic device through the power management module 141.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be disposed in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other technologies.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active matrix organic light emitting diode or an active matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED), flexible light-emitting diode (flex light-emitting diode, FLED), Miniled, MicroLed, Micro-oLed, quantum dot light emitting diodes (quantum dot light emitting diodes, QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the display panel can be realized by using OLED, AMOLED, or FLED, so that the display screen 194 can be bent.
  • a display screen that can be bent is called a foldable display screen.
  • the foldable display screen may be one screen, or may be a display screen composed of multiple screens pieced together, which is not limited here.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the camera 193 includes a front camera and a rear camera, wherein the front camera can obtain a target image for the user, and the electronic device determines the position information of the user's attention point on the display screen of the electronic device based on the target image.
  • for the target image, refer to the related description of S902 in FIG. 9 above.
  • for the process of determining the position information of the attention point based on the target image, refer to the related description of S903-S905 in FIG. 9 above, which will not be repeated here.
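As an illustration only (the patent defers the actual gaze-estimation details to S903-S905 in FIG. 9), the final step of such a pipeline — mapping an estimated gaze ray to a position on the display — might be sketched as follows. The function name and the simple ray-plane geometry are assumptions for illustration, not the patented method:

```python
def gaze_to_screen_point(eye_pos, gaze_dir, screen_w, screen_h):
    """Intersect a gaze ray with the screen plane z = 0 and clamp the
    hit point to the screen bounds (a simplified geometric gaze model)."""
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None  # gaze ray is parallel to the screen plane
    t = -ez / dz
    if t < 0:
        return None  # gaze ray points away from the screen
    x = ex + t * dx
    y = ey + t * dy
    # clamp the attention point to the visible screen area
    x = min(max(x, 0.0), screen_w)
    y = min(max(y, 0.0), screen_h)
    return (x, y)
```

For example, an eye 300 units in front of the screen looking straight ahead from (540, 1200, 300) yields the attention point (540, 1200).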
  • the ISP is used for processing the data fed back by the camera 193 .
  • light is transmitted through the lens to the photosensitive element of the camera, which converts the light signal into an electrical signal and transmits it to the ISP for processing, where it is converted into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • the ISP can also be used to automatically focus on the subject at the attention point based on the position information of the attention point.
  • for the specific focusing process, refer to the relevant description in S906-S907 in FIG. 9 above, which will not be repeated here.
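The patent defers the focusing details to S906-S907. For orientation only, a generic contrast-detection autofocus sweep — one common way an ISP scores candidate lens positions, not necessarily the method claimed here — can be sketched as:

```python
def autofocus(capture_roi, lens_positions):
    """Generic contrast-detection autofocus: sweep candidate lens
    positions, score the sharpness of the focus region at each one,
    and return the position with the highest score."""
    def sharpness(pixels):
        # variance of pixel intensities as a simple contrast measure
        mean = sum(pixels) / len(pixels)
        return sum((p - mean) ** 2 for p in pixels) / len(pixels)

    best_pos, best_score = None, float("-inf")
    for pos in lens_positions:
        score = sharpness(capture_roi(pos))
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```

`capture_roi` is a hypothetical callback that captures the region of interest around the attention point at a given lens position; in a real device the voice coil motor would move the lens between captures.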
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the NPU may be used to identify images of human faces and eyes in the target image.
  • the NPU can also be used to identify the subject around the attention point based on the attention point, and further focus based on the position information of the subject.
  • for this process, refer to the relevant descriptions of implementation B in S906 and of S907 in FIG. 9 above, which will not be repeated here.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
  • Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the receiver 170B can be placed close to the human ear to receive the voice.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can put his mouth close to the microphone 170C to make a sound, and input the sound signal to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 180A may be disposed on display screen 194 .
  • there are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • a capacitive pressure sensor may comprise at least two parallel plates made of conductive material.
  • when a force is applied to the pressure sensor 180A, the capacitance between the plates changes, and the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, the instruction of creating a new short message is executed.
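The two-threshold behavior described above — a light press on the Messages icon views messages, a firm press creates a new one — can be sketched as a small dispatch function. The threshold value and all names are hypothetical:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized pressure threshold

def dispatch_touch(icon, pressure):
    """Map a touch on the short-message application icon to an
    instruction based on the detected touch pressure."""
    if icon != "messages":
        return "ignore"
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_messages"   # light press: view short messages
    return "new_message"         # firm press: create a new short message
```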
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
  • the angular velocity of the electronic device 100 around three axes may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
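The anti-shake compensation described above (computing the lens shift from the detected shake angle) can be approximated, for small angles and a thin-lens model, as shift ≈ f·tan(θ). The following sketch is an illustration under that assumption, not the device's actual OIS algorithm:

```python
import math

def ois_compensation(shake_deg, focal_length_mm):
    """Estimate the lens shift (in mm) needed to counteract a small
    angular shake, using the approximation shift ≈ f * tan(theta)."""
    return focal_length_mm * math.tan(math.radians(shake_deg))
```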
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
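The altitude calculation from air pressure is commonly done with the international barometric formula; the sketch below assumes standard sea-level pressure, which the patent does not specify:

```python
def pressure_to_altitude(pressure_hpa, sea_level_hpa=1013.25):
    """International barometric formula: altitude in metres from the
    measured air pressure (valid in the lower troposphere)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1 / 5.255))
```

At 900 hPa this yields roughly 1 km of altitude, which is why the barometer can assist positioning and navigation.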
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • when the electronic device 100 is a clamshell machine, it can detect the opening and closing of the clamshell according to the magnetic sensor 180D.
  • features such as automatic unlocking of the flip cover can then be set according to the detected opening and closing state of the leather case or clamshell.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to make a call, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode, automatic unlock and lock screen in pocket mode.
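The proximity-based screen-off decision described above (sufficient reflected infrared light during a call means the device is at the ear) can be sketched as a simple threshold rule; the threshold value and function name are hypothetical:

```python
def screen_should_turn_off(reflected_light, in_call, threshold=50):
    """Turn the screen off when enough reflected IR light indicates the
    device is held against the ear during a call, to save power."""
    return in_call and reflected_light >= threshold
```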
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 may reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • in some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the electronic device 100 from being shut down abnormally due to the low temperature.
  • in some other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
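The tiered temperature strategy above (throttle the processor when hot, heat the battery when cold, boost the battery output voltage when colder still) can be sketched as follows; all threshold values are hypothetical, since the patent does not specify them:

```python
def thermal_policy(temp_c):
    """Sketch of a tiered temperature treatment strategy.
    The thresholds (45, -10, 0 degrees C) are illustrative only."""
    if temp_c > 45:
        return "reduce_processor_performance"  # thermal protection
    if temp_c < -10:
        return "boost_battery_output_voltage"  # avoid low-temp shutdown
    if temp_c < 0:
        return "heat_battery"                  # prevent abnormal shutdown
    return "normal"
```

Note the ordering: the coldest tier is checked before the merely-cold tier, so each temperature maps to exactly one action.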
  • Touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined into a bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 .
  • Different application scenarios for example: time reminder, receiving information, alarm clock, games, etc.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • the SIM card can be connected and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • the term “when” may be interpreted to mean “if” or “after” or “in response to determining" or “in response to detecting".
  • the phrases “in determining” or “if detected (a stated condition or event)” may be interpreted to mean “if determining" or “in response to determining" or “on detecting (a stated condition or event)” or “in response to detecting (a stated condition or event)”.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, DVD), or semiconductor media (eg, solid state hard disk), etc.
  • all or part of the processes of the foregoing method embodiments can be implemented by a computer program instructing related hardware.
  • the program can be stored in a computer-readable storage medium.
  • when the program is executed, it may include the processes of the foregoing method embodiments.
  • the aforementioned storage medium includes: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, and other various media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to embodiments, the present application relates to a focusing method and a related device. The method comprises the following steps: in response to a first operation by a user, an electronic device starts shooting, displays a first interface, and then displays, in the first interface, preview images collected by a camera; and the electronic device displays a first preview image in the first interface, the first preview image being a preview image collected by the camera using, as a focus point, an attention point that meets a preset condition, the attention point being the position of the user's gaze on the first interface. In the embodiments of the present application, automatic focusing can be performed according to a user's attention point, which reduces user operations and improves user experience.
PCT/CN2022/072712 2021-06-25 2022-01-19 Procédé de focalisation et dispositif associé WO2022267464A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110713771.5A CN113572956A (zh) 2021-06-25 2021-06-25 一种对焦的方法及相关设备
CN202110713771.5 2021-06-25

Publications (1)

Publication Number Publication Date
WO2022267464A1 true WO2022267464A1 (fr) 2022-12-29

Family

ID=78162798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072712 WO2022267464A1 (fr) 2021-06-25 2022-01-19 Procédé de focalisation et dispositif associé

Country Status (2)

Country Link
CN (1) CN113572956A (fr)
WO (1) WO2022267464A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117097985A (zh) * 2023-10-11 2023-11-21 荣耀终端有限公司 对焦方法、电子设备及计算机可读存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113572956A (zh) * 2021-06-25 2021-10-29 荣耀终端有限公司 一种对焦的方法及相关设备
CN116074624B (zh) * 2022-07-22 2023-11-10 荣耀终端有限公司 一种对焦方法和装置
CN117472256A (zh) * 2023-12-26 2024-01-30 荣耀终端有限公司 图像处理方法及电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103905709A (zh) * 2012-12-25 2014-07-02 联想(北京)有限公司 Method for controlling an electronic device, and electronic device
CN106303193A (zh) * 2015-05-25 2017-01-04 展讯通信(天津)有限公司 Image capturing method and apparatus
CN106331498A (zh) * 2016-09-13 2017-01-11 青岛海信移动通信技术股份有限公司 Image processing method and apparatus for mobile terminal
CN110177210A (zh) * 2019-06-17 2019-08-27 Oppo广东移动通信有限公司 Photographing method and related apparatus
CN111510626A (zh) * 2020-04-21 2020-08-07 Oppo广东移动通信有限公司 Image synthesis method and related apparatus
CN112261300A (zh) * 2020-10-22 2021-01-22 维沃移动通信(深圳)有限公司 Focusing method and apparatus, and electronic device
CN113572956A (zh) * 2021-06-25 2021-10-29 荣耀终端有限公司 Focusing method and related device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117097985A (zh) * 2023-10-11 2023-11-21 荣耀终端有限公司 Focusing method, electronic device, and computer-readable storage medium
CN117097985B (zh) * 2023-10-11 2024-04-02 荣耀终端有限公司 Focusing method, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN113572956A (zh) 2021-10-29

Similar Documents

Publication Publication Date Title
EP3866458B1 Image capture method and device
EP3937480A1 Multi-track video recording method and device
CN110035141B Photographing method and device
WO2021136050A1 Image photographing method and related apparatus
WO2022267464A1 Focusing method and related device
WO2021129198A1 Photographing method in a long-focal-length scenario, and terminal
CN112492193B Callback stream processing method and device
CN113475057A Video recording frame rate control method and related apparatus
US11272116B2 Photographing method and electronic device
WO2023273323A9 Focusing method and electronic device
WO2020015149A1 Wrinkle detection method and electronic device
WO2023131070A1 Electronic device management method, electronic device, and readable storage medium
CN113467735A Image adjustment method, electronic device, and storage medium
CN113572957B Photographing focus method and related device
WO2022068505A1 Photographing method and electronic device
WO2022033344A1 Video stabilization method, terminal device, and computer-readable storage medium
CN114302063B Photographing method and device
WO2022105670A1 Display method and terminal
WO2023071497A1 Photographing parameter adjustment method, electronic device, and storage medium
WO2021197014A1 Image transmission method and apparatus
CN116055872B Image acquisition method, electronic device, and computer-readable storage medium
CN116437194B Method and apparatus for displaying a preview image, and readable storage medium
RU2789447C1 Multi-channel video recording method and device
WO2021063156A1 Method for adjusting the folding angle of an electronic device, and electronic device
CN114691066A Application display method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22826973

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE