WO2021063155A1 - Human eye region locating method and related apparatus - Google Patents

人眼区域定位方法、相关装置 (Human eye region locating method and related apparatus)

Info

Publication number
WO2021063155A1
WO2021063155A1 (PCT/CN2020/113541)
Authority
WO
WIPO (PCT)
Prior art keywords
original image
human eye
area
target
determining
Prior art date
Application number
PCT/CN2020/113541
Other languages
English (en)
French (fr)
Inventor
韩世广
方攀
陈岩
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2021063155A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/19: Sensors therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction

Definitions

  • This application relates to the field of electronic technology, and in particular to a method for locating a human eye area and related devices.
  • Some electronic devices on the market support a human eye region locating function and use eye-tracking technology to let the user control the device; eye tracking, in turn, relies on locating the human eye region.
  • In the prior art, when an electronic device locates the human eye region it usually locates the face region first and then narrows the face region down to a facial region of interest.
  • Such an approach has a narrow range of application, and locating the eye region in this way is inefficient.
  • The embodiments of the present application provide a human eye region locating method and related devices, so as to broaden the mechanisms available for locating the eye region and improve the efficiency of eye locating.
  • In a first aspect, an embodiment of the present application provides a method for locating a human eye region, applied to an electronic device, the method including:
  • obtaining an original image, the original image being an image collected by the electronic device while it illuminates a target user wearing glasses with an infrared (IR) lamp;
  • determining at least two light spots in the original image;
  • determining, among the at least two light spots, the two spots with the largest and second-largest areas as target spots;
  • determining the human eye region in the original image according to the target spots.
  • In a second aspect, an embodiment of the present application provides a human eye region locating apparatus, applied to an electronic device and including a processing unit and a communication unit, where:
  • the processing unit is configured to obtain an original image through the communication unit, the original image being an image collected by the electronic device while it illuminates a target user wearing glasses with an infrared IR lamp; to determine at least two light spots in the original image; to determine, among the at least two light spots, the two spots with the largest and second-largest areas as target spots; and to determine the human eye region in the original image according to the target spots.
  • In a third aspect, an embodiment of the present application provides an electronic device including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and are configured to be executed by the processor,
  • and the programs include instructions for performing the steps of any method of the first aspect of the embodiments of the present application.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps described in any method of the second aspect of the embodiments of the present application.
  • In a fifth aspect, the embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute part or all of the steps described in any method of the second aspect of the embodiments of the present application.
  • The computer program product may be a software installation package.
  • It can be seen that, in the embodiments of the present application, the electronic device first obtains an original image, the original image being the image collected by the electronic device while it illuminates the target user wearing glasses with an infrared IR lamp; next, at least two light spots in the original image are determined; then, the two spots with the largest and second-largest areas among the at least two spots are determined as the target spots; and finally, the human eye region in the original image is determined according to the target spots.
  • In this way, the electronic device of the embodiments of the present application can illuminate a target user wearing glasses with an infrared IR lamp, collect the original image under that illumination, and determine the human eye region by locating the light spots in the original image, which broadens the mechanisms for locating the eye region and improves the efficiency of eye locating.
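  • As an illustration only, the overall flow described above can be expressed as a short pipeline. The sketch below is not the patented implementation; the helper functions it calls (detect_spots, pick_target_spots, spot_reference_point, eye_region_from_midpoint) are sketched in the later examples of this document, and the default region size is an assumed value.

```python
import numpy as np

def locate_eye_region(original_image: np.ndarray, region_width=200, region_height=100):
    """Illustrative pipeline: IR image -> light spots -> two largest spots -> eye region."""
    spots = detect_spots(original_image)            # determine at least two light spots
    spot_a, spot_b = pick_target_spots(spots)       # spots with the largest and second-largest areas
    ref_a = spot_reference_point(spot_a[1])         # reference point (centroid) of each target spot
    ref_b = spot_reference_point(spot_b[1])
    return eye_region_from_midpoint(ref_a, ref_b, region_width, region_height)
```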
  • FIG. 1A is a schematic flowchart of a method for locating a human eye area according to an embodiment of the present application
  • FIG. 1B is a schematic diagram of multiple light spots in the original image
  • FIG. 1C is a schematic diagram of determining the human eye area in the original image with the midpoint of two reference points as the geometric center;
  • FIG. 1D is a schematic diagram of determining the human eye area in the original image with two reference points as the center points;
  • FIG. 2 is a schematic flowchart of another method for locating a human eye area provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of yet another method for locating a human eye area according to an embodiment of the present application
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Fig. 5 is a block diagram of functional modules of a device for locating a human eye area provided by an embodiment of the present application.
  • The electronic devices involved in the embodiments of the present application may be electronic devices with communication capabilities, and may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on.
  • In the prior art, when an electronic device locates the human eye region it usually locates the face region first and then narrows the face region down to a facial region of interest. Such an approach has a narrow range of application, and locating the eye region in this way is inefficient.
  • Based on the above problem, an embodiment of the present application proposes a method for locating the human eye region, which is described in detail below with reference to the accompanying drawings.
  • FIG. 1A is a schematic flowchart of a method for locating a human eye area provided by an embodiment of the present application, which is applied to an electronic device. As shown in FIG. 1A, the method for locating a human eye area includes:
  • S101: An electronic device acquires an original image, where the original image is an image collected by the electronic device while it illuminates a target user wearing glasses with an infrared (IR) lamp.
  • The original image includes a face image of the target user wearing glasses and a series of light spots produced by the infrared illumination.
  • An infrared (IR) lamp is a lamp whose radiation lies mainly in the infrared spectral range.
  • The electronic device may acquire the original image as follows: the electronic device acquires a plurality of first images while the target user wearing glasses is illuminated by the infrared IR lamp; the electronic device determines, among the plurality of first images, m second images in which the target user's eyes are open, m being a positive integer greater than 1; the electronic device determines, among the m second images, n third images in which no hand of the user blocks the glasses, n being a positive integer greater than 1 and less than m; and the electronic device takes any one of the n third images as the original image.
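  • A minimal sketch of this frame-selection step, assuming hypothetical predicates eyes_open() and hand_blocks_glasses() that the description does not specify:

```python
def select_original_image(first_images):
    """Keep frames where the target user's eyes are open (the m second images), drop
    frames where a hand blocks the glasses (leaving the n third images), and take any
    remaining frame as the original image. Returns None if no frame qualifies."""
    second_images = [img for img in first_images if eyes_open(img)]
    third_images = [img for img in second_images if not hand_blocks_glasses(img)]
    return third_images[0] if third_images else None
```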
  • It should be noted that the purpose of locating the human eye region is to further realize eye tracking; therefore, the original image is an image in which the target user's eyes are open.
  • It can be seen that, in this example, the electronic device can illuminate the target user wearing glasses with an infrared IR lamp and obtain an image collected under that illumination, which improves the reliability of locating the human eye region.
  • S102: The electronic device determines at least two light spots in the original image.
  • The at least one light spot in the original image is a spot produced by the infrared IR lamp illuminating the target user wearing glasses. When the electronic device illuminates the target user wearing glasses with the infrared IR lamp, multiple spots may be produced: first, a lens spot is formed on each of the two lenses of the glasses worn by the target user; second, frame spots may also be formed on the frame of the glasses worn by the target user; and when the user's eyes are open, pupil spots will also appear on the user's eyes under the illumination of the IR lamp. The at least one spot may also include other spots, which is not specifically limited here.
  • The spots in an image can be used as image features of that image, so determining the at least two light spots in the original image is essentially a preliminary extraction of image features.
  • For example, referring to FIG. 1B, FIG. 1B is a schematic diagram of multiple light spots in the original image. As shown in FIG. 1B, the original image includes two pupil spots, two lens spots, and three frame spots, of which the two lens spots have the largest areas.
  • It can be seen that, in this example, the electronic device can determine at least one light spot in the original image.
  • S103: The electronic device determines, among the at least two light spots, the two spots with the largest and second-largest areas as the target spots.
  • When the IR lamp illuminates the target user wearing glasses, the two spots with the largest and second-largest areas are the spots formed on the two lenses of the glasses.
  • It can be seen that, in this example, the electronic device can determine, among the at least one light spot in the original image, the two spots with the largest and second-largest areas as the target spots.
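  • A minimal sketch of step S103, assuming each detected spot is represented as an (area, pixel_coordinates) pair such as those produced by the detection example later in this document:

```python
def pick_target_spots(spots):
    """Return the two spots with the largest and second-largest areas; per the
    description these are normally the reflections on the two lenses."""
    if len(spots) < 2:
        raise ValueError("at least two light spots are required")
    ranked = sorted(spots, key=lambda spot: spot[0], reverse=True)
    return ranked[0], ranked[1]
```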
  • S104: The electronic device determines the human eye region in the original image according to the target spots.
  • The target spots are the spots formed on the two lenses of the glasses. Depending on the user's distance from the electronic device and the user's posture, the position of the spot formed on each lens may change, but the spots always fall on the lenses, and the relative position of the glasses and the user's eyes does not change much. Therefore, in practical applications, the spots formed on the two lenses of the glasses worn by the target user can be used as the target spots for determining the human eye region.
  • It can be seen that, in this example, the electronic device can illuminate the target user wearing glasses with an infrared IR lamp, collect the original image under that illumination, and determine the human eye region by locating the light spots in the original image, which broadens the mechanisms for locating the eye region and improves the efficiency of eye locating.
  • It can be seen that, in the embodiments of the present application, the electronic device first obtains an original image, the original image being the image collected by the electronic device while it illuminates the target user wearing glasses with an infrared IR lamp; next, at least two light spots in the original image are determined; then, the two spots with the largest and second-largest areas among the at least two spots are determined as the target spots; and finally, the human eye region in the original image is determined according to the target spots.
  • In this way, the electronic device of the embodiments of the present application can illuminate a target user wearing glasses with an infrared IR lamp, collect the original image under that illumination, and determine the human eye region by locating the light spots in the original image, which broadens the mechanisms for locating the eye region and improves the efficiency of eye locating.
  • In a possible example, determining, by the electronic device, the human eye region in the original image according to the target spots includes: when the electronic device detects that the glasses worn by the target user are skewed, determining the skewness of the glasses worn by the target user; if the skewness is greater than or equal to a preset skewness, correcting the target spots according to the skewness; and determining, by the electronic device, the human eye region in the original image according to the corrected target spots.
  • Here, skew means that the user is wearing the glasses crooked. Correcting the target spots according to the skewness may be implemented as follows: the electronic device corrects the target spots along the direction in which the glasses are skewed.
  • For example, the preset skewness may be an angle, such as 5 degrees. If the skewness measured in the original image is 6 degrees, which is greater than 5 degrees, the two target spots are rotated by 6 degrees along the offset direction with the center of the glasses frame as the center of rotation.
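  • A sketch of the skew correction described above, assuming the skew angle and the center of the glasses frame have already been estimated (how they are measured is not specified here); the sign convention of the rotation is an assumption.

```python
import numpy as np

def correct_target_spots(spot_points, frame_center, skew_degrees):
    """Rotate the target-spot coordinates about the glasses-frame center by the
    measured skew angle. spot_points is an (N, 2) array of (x, y) positions."""
    theta = np.deg2rad(skew_degrees)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    center = np.asarray(frame_center, dtype=float)
    return (np.asarray(spot_points, dtype=float) - center) @ rotation.T + center
```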
  • It can be seen that, in this example, the electronic device can correct the target spots according to how the user is wearing the glasses, making the location of the human eye region more accurate.
  • In a possible example, determining, by the electronic device, the human eye region in the original image according to the corrected target spots includes: determining, by the electronic device, a reference point of each spot among the corrected target spots to obtain two reference points of the original image; and determining, by the electronic device, the human eye region in the original image according to the two reference points, the human eye region including at least one region with a specific shape.
  • The human eye region may be a single region with a specific shape that contains the two reference points and the human eyes. The specific shape may be any regular shape such as a circle, an ellipse, a square, a rectangle, a trapezoid, a rhombus or a polygon, or it may be an irregular shape; this is not specifically limited.
  • The human eye region may also be two regions with specific shapes, each containing one of the two reference points. The sizes and shapes of the two regions may be the same or different, and each shape may again be any regular shape such as a circle, an ellipse, a square, a rectangle, a trapezoid, a rhombus or a polygon, or an irregular shape; this is not specifically limited. For example, the human eye region may be one rectangle and one ellipse.
  • It can be seen that, in this example, the electronic device can determine two reference points from the corrected target spots and then determine the human eye region in the original image from those two reference points, which makes the locating of the human eye region more intelligent.
  • In a possible example, determining, by the electronic device, the reference point of each spot among the corrected target spots to obtain the two reference points of the original image includes: obtaining, by the electronic device, position information corresponding to at least one pixel of each spot among the corrected target spots, the position information including an abscissa and an ordinate; determining, by the electronic device, the average of the abscissas of the at least one pixel; determining, by the electronic device, the average of the ordinates of the at least one pixel; and determining, by the electronic device, the reference point of each spot among the corrected target spots according to the average abscissa and the average ordinate, to obtain the two reference points of the original image.
  • A specific implementation of determining the reference point of each corrected target spot from the average abscissa and the average ordinate is as follows: the electronic device takes, for each corrected target spot, the average of its pixels' abscissas as the abscissa of the corresponding reference point and the average of its pixels' ordinates as the ordinate of the corresponding reference point, thereby obtaining the two reference points of the original image.
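  • A sketch of the reference-point computation above: the reference point of a spot is simply the mean abscissa and mean ordinate of its pixels, i.e. the spot's centroid.

```python
import numpy as np

def spot_reference_point(pixel_coords):
    """pixel_coords is an (N, 2) array of (x, y) pixel positions belonging to one spot;
    the reference point is (mean x, mean y)."""
    coords = np.asarray(pixel_coords, dtype=float)
    return float(coords[:, 0].mean()), float(coords[:, 1].mean())
```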
  • It can be seen that, in this example, the electronic device can obtain the two reference points of the original image from the positions of the pixels of each corrected target spot, which makes the locating of the human eye region more intelligent.
  • In a possible example, determining, by the electronic device, the human eye region in the original image according to the target spots includes: when the electronic device detects that the glasses worn by the target user are not skewed, or when it detects that the glasses are skewed but the skewness is less than the preset skewness, determining the reference point of each of the two spots with the largest and second-largest areas to obtain two reference points of the original image; and determining, by the electronic device, the human eye region in the original image according to the two reference points, the human eye region including at least one region with a specific shape.
  • The principle of determining the reference point of each of the two spots with the largest and second-largest areas to obtain the two reference points of the original image is the same as the principle of determining the reference point of each corrected target spot described above; here the target spots simply are not corrected or do not need to be corrected, so reference is made to the foregoing and the details are not repeated.
  • It should be noted that, if the target spots are not corrected, the skewness of the glasses worn by the target user may exceed the preset skewness; in that case the corresponding eye region may need to be enlarged so that the target user's eyes are still included. Therefore, when the chosen strategy is to determine the human eye region directly from the target spots without applying any correction to the original image, this can be achieved by expanding the delineated range of the human eye region, as sketched below.
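  • One possible way to expand the delineated range, as mentioned above, is to scale the region about its geometric center; the margin factor below is an assumed value, not something the description specifies.

```python
def expand_region(center_x, center_y, width, height, margin=1.3):
    """Enlarge an eye region (given as center plus width/height) by a fixed margin,
    so the eyes stay inside it even when the glasses are slightly skewed."""
    return center_x, center_y, width * margin, height * margin
```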
  • It can be seen that, in this example, the electronic device can determine the reference point of each of the two spots with the largest and second-largest areas in the original image, obtain two reference points of the original image, and use them to determine the human eye region, which improves the reliability of locating the human eye region.
  • In a possible example, determining, by the electronic device, the human eye region in the original image according to the two reference points includes: determining, by the electronic device, the midpoint of the two reference points; and determining, by the electronic device, the human eye region in the original image with the midpoint as its geometric center.
  • Determining the human eye region with the midpoint as the geometric center may mean that the electronic device delineates, with the midpoint as the geometric center, a rectangular region in the original image as the human eye region; it may equally mean delineating, with the midpoint as the geometric center, a parallelogram region, an elliptical region, or an irregular region in the original image as the human eye region; and so on, this is not specifically limited.
  • For example, referring to FIG. 1C, FIG. 1C is a schematic diagram of determining the human eye region in the original image with the midpoint of the two reference points as the geometric center. As shown in FIG. 1C, points a and b are the reference points, point c is the midpoint of point a and point b, and the human eye region is an elliptical region delineated with point c as its center.
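  • A sketch of this first strategy (FIG. 1C): one region whose geometric center is the midpoint of the two reference points. A rectangle is used here for simplicity; the description equally allows elliptical, parallelogram or irregular regions, and the width and height are assumed parameters.

```python
def eye_region_from_midpoint(ref_a, ref_b, width, height):
    """Return (x, y, w, h) of a single eye region centered on the midpoint of the
    two reference points ref_a and ref_b (each an (x, y) tuple)."""
    mid_x = (ref_a[0] + ref_b[0]) / 2.0
    mid_y = (ref_a[1] + ref_b[1]) / 2.0
    return mid_x - width / 2.0, mid_y - height / 2.0, width, height
```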
  • It can be seen that, in this example, the electronic device can determine the human eye region by using the midpoint of the determined reference points as the geometric center.
  • In a possible example, determining, by the electronic device, the human eye region in the original image according to the two reference points includes: determining, by the electronic device, a first region with the first of the two reference points as a first center point; and determining, by the electronic device, a second region with the second of the two reference points as a second center point, the first region and the second region being the human eye region.
  • The first region and the second region may have the same shape and size, for example two circles of the same size; or their shapes and sizes may differ, for example two circles of different sizes, a circle and an ellipse, or an ellipse and a circle, and so on; this is not specifically limited.
  • For example, referring to FIG. 1D, FIG. 1D is a schematic diagram of determining the human eye region in the original image with the two reference points as center points. As shown in FIG. 1D, points a and b are the reference points of the original image, and the human eye region consists of a circular region centered on point a and a circular region centered on point b.
  • It can be seen that, in this example, the electronic device can obtain two regions centered on the two reference points, and these two regions are the human eye region.
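  • A sketch of this second strategy (FIG. 1D): two regions, each centered on one reference point. Circles are used here; the radii are assumed parameters, and the two regions may just as well have different shapes or sizes.

```python
def eye_regions_from_reference_points(ref_a, ref_b, radius_a, radius_b):
    """Return two circular eye regions as (center, radius) pairs, one per reference point."""
    return (ref_a, radius_a), (ref_b, radius_b)
```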
  • In a possible example, determining, by the electronic device, the at least two light spots in the original image includes: acquiring, by the electronic device, a gray value for each of a plurality of pixels of the original image; determining, by the electronic device, a plurality of target pixels whose gray values are greater than a preset gray threshold; and determining, by the electronic device, the at least two light spots in the original image according to the plurality of target pixels.
  • If the original image is a black-and-white image, that is, a grayscale image, the gray value of each of the plurality of pixels of the original image is simply that pixel's value.
  • If the original image is a color image, the gray value of each of the plurality of pixels of the original image can be obtained as follows: the electronic device determines the R, G and B values of the three RGB channels of each pixel, and then determines the gray value of each pixel from those R, G and B values.
  • Specifically, any color is composed of the three primary colors red, green and blue. For a pixel whose original color is RGB(R, G, B), the gray value may be computed, for example, as Gray = R*0.3 + G*0.59 + B*0.11, as Gray = (R*30 + G*59 + B*11)/100, as Gray = (R*28 + G*151 + B*77) >> 8, as Gray = (R + G + B)/3, or simply as Gray = G, where Gray is the gray value; other weightings may also be used.
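  • A sketch of the gray-value computation for a color image, using the first of the weightings listed above; any of the other listed weightings could be substituted.

```python
import numpy as np

def to_gray(rgb_image):
    """Convert an (H, W, 3) RGB image to a single-channel gray image with the
    weighted-average method Gray = 0.3*R + 0.59*G + 0.11*B."""
    r, g, b = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    return 0.3 * r + 0.59 * g + 0.11 * b
```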
  • The preset gray threshold may be any value between 200 and 255, for example 255 or 254; this is not specifically limited.
  • It can be seen that, in this example, the electronic device can determine at least one light spot in the original image from the gray value of each of the plurality of pixels of the original image, which makes the locating of the human eye region more intelligent.
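  • A minimal sketch of the whole spot-determination step: threshold the gray image with the preset gray threshold and group the resulting target pixels into connected bright regions. The threshold value and the use of SciPy's connected-component labeling are assumptions; the description only requires that pixels above the threshold be grouped into spots.

```python
import numpy as np
from scipy import ndimage

def detect_spots(gray_image, gray_threshold=220):
    """Return a list of (area, pixel_coordinates) pairs, one per detected light spot."""
    target_mask = gray_image > gray_threshold          # target pixels above the preset threshold
    labels, spot_count = ndimage.label(target_mask)    # connected bright regions = light spots
    spots = []
    for spot_id in range(1, spot_count + 1):
        ys, xs = np.nonzero(labels == spot_id)
        spots.append((xs.size, np.stack([xs, ys], axis=1)))   # area and (x, y) coordinates
    return spots
```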
  • FIG. 2 is a schematic flow diagram of another method for locating a human eye area provided by an embodiment of the present application. As shown in the figure, the method for locating a human eye area includes:
  • S201: The electronic device acquires an original image, the original image being an image collected by the electronic device while it illuminates a target user wearing glasses with an infrared IR lamp;
  • S202: The electronic device acquires a gray value for each of a plurality of pixels of the original image;
  • S203: The electronic device determines a plurality of target pixels whose gray values are greater than a preset gray threshold;
  • S204: The electronic device determines at least two light spots in the original image according to the plurality of target pixels;
  • S205: The electronic device determines, among the at least two light spots, the two spots with the largest and second-largest areas as target spots;
  • S206: The electronic device determines the human eye region in the original image according to the target spots.
  • It can be seen that, in the embodiments of the present application, the electronic device first obtains an original image, the original image being the image collected by the electronic device while it illuminates the target user wearing glasses with an infrared IR lamp; next, at least two light spots in the original image are determined; then, the two spots with the largest and second-largest areas among the at least two spots are determined as the target spots; and finally, the human eye region in the original image is determined according to the target spots.
  • In this way, the electronic device of the embodiments of the present application can illuminate a target user wearing glasses with an infrared IR lamp, collect the original image under that illumination, and determine the human eye region by locating the light spots in the original image, which broadens the mechanisms for locating the eye region and improves the efficiency of eye locating.
  • FIG. 3 is a schematic flow diagram of another method for locating a human eye area provided by an embodiment of the present application. As shown in the figure, the method for locating a human eye area includes:
  • S301: An electronic device acquires an original image, the original image being an image collected by the electronic device while it illuminates a target user wearing glasses with an infrared IR lamp;
  • S302: The electronic device determines at least two light spots in the original image;
  • S303: The electronic device determines, among the at least two light spots, the two spots with the largest and second-largest areas as target spots;
  • S304: When the electronic device detects that the glasses worn by the target user are skewed, it determines the skewness of the glasses worn by the target user;
  • S305: If the skewness is greater than or equal to a preset skewness, the target spots are corrected according to the skewness;
  • S306: The electronic device determines the human eye region in the original image according to the corrected target spots;
  • S307: When the electronic device detects that the glasses worn by the target user are not skewed, or detects that the glasses are skewed but the skewness is less than the preset skewness, it determines the reference point of each of the two spots with the largest and second-largest areas to obtain two reference points of the original image;
  • S308: The electronic device determines the human eye region in the original image according to the two reference points, the human eye region including at least one region with a specific shape.
  • It can be seen that, in the embodiments of the present application, the electronic device first obtains an original image, the original image being the image collected by the electronic device while it illuminates the target user wearing glasses with an infrared IR lamp; next, at least two light spots in the original image are determined; then, the two spots with the largest and second-largest areas among the at least two spots are determined as the target spots; and finally, the human eye region in the original image is determined according to the target spots.
  • In this way, the electronic device of the embodiments of the present application can illuminate a target user wearing glasses with an infrared IR lamp, collect the original image under that illumination, and determine the human eye region by locating the light spots in the original image, which broadens the mechanisms for locating the eye region and improves the efficiency of eye locating.
  • In addition, the electronic device can determine two reference points from the corrected target spots and then determine the human eye region in the original image from those two reference points, which makes the locating of the human eye region more intelligent.
  • FIG. 4 is a schematic structural diagram of an electronic device 400 provided by an embodiment of the present application.
  • The electronic device 400 includes an application processor 410, a memory 420, a communication interface 430, and one or more programs 421, where the one or more programs 421 are stored in the memory 420 and are configured to be executed by the application processor 410, and the one or more programs 421 include instructions for performing the following steps:
  • obtaining an original image, the original image being an image collected by the electronic device while it illuminates a target user wearing glasses with an infrared IR lamp;
  • determining at least two light spots in the original image;
  • determining, among the at least two light spots, the two spots with the largest and second-largest areas as target spots;
  • determining the human eye region in the original image according to the target spots.
  • It can be seen that, in the embodiments of the present application, the electronic device first acquires an original image, the original image being the image collected by the electronic device while it illuminates the target user wearing glasses with an infrared IR lamp; next, at least two light spots in the original image are determined; then, the two spots with the largest and second-largest areas among the at least two spots are determined as the target spots; and finally, the human eye region in the original image is determined according to the target spots.
  • In this way, the electronic device of the embodiments of the present application can illuminate a target user wearing glasses with an infrared IR lamp, collect the original image under that illumination, and determine the human eye region by locating the light spots in the original image, which broadens the mechanisms for locating the eye region and improves the efficiency of eye locating.
  • In a possible example, in terms of determining the human eye region in the original image according to the target spots, the instructions of the one or more programs 421 are specifically used to: when it is detected that the glasses worn by the target user are skewed, determine the skewness of the glasses worn by the target user; if the skewness is greater than or equal to a preset skewness, correct the target spots according to the skewness; and determine the human eye region in the original image according to the corrected target spots.
  • In a possible example, in terms of determining the human eye region in the original image according to the corrected target spots, the instructions of the one or more programs 421 are specifically used to: determine the reference point of each spot among the corrected target spots to obtain two reference points of the original image; and determine the human eye region in the original image according to the two reference points, the human eye region including at least one region with a specific shape.
  • In a possible example, in terms of determining the reference point of each spot among the corrected target spots to obtain the two reference points of the original image, the instructions of the one or more programs 421 are specifically used to: obtain position information corresponding to at least one pixel of each corrected target spot, the position information including an abscissa and an ordinate; determine the average of the abscissas of the at least one pixel; determine the average of the ordinates of the at least one pixel; and determine the reference point of each corrected target spot according to the average abscissa and the average ordinate, to obtain the two reference points of the original image.
  • In a possible example, in terms of determining the human eye region in the original image according to the target spots, the instructions of the one or more programs 421 are specifically used to: when it is detected that the glasses worn by the target user are not skewed, or when it is detected that the glasses are skewed but the skewness is less than the preset skewness, determine the reference point of each of the two spots with the largest and second-largest areas to obtain two reference points of the original image; and determine the human eye region in the original image according to the two reference points, the human eye region including at least one region with a specific shape.
  • In a possible example, in terms of determining the human eye region in the original image according to the two reference points, the instructions of the one or more programs 421 are specifically used to: determine the midpoint of the two reference points; and determine the human eye region in the original image with the midpoint as the geometric center.
  • In a possible example, in terms of determining the human eye region in the original image according to the two reference points, the instructions of the one or more programs 421 are specifically used to: determine a first region with the first of the two reference points as a first center point; and determine a second region with the second of the two reference points as a second center point, the first region and the second region being the human eye region.
  • In a possible example, in terms of determining the at least two light spots in the original image, the instructions of the one or more programs 421 are specifically used to: obtain a gray value for each of a plurality of pixels of the original image; determine a plurality of target pixels whose gray values are greater than a preset gray threshold; and determine the at least two light spots in the original image according to the plurality of target pixels.
  • The foregoing has described the solutions of the embodiments of the present application mainly from the perspective of the method-side execution flow. It can be understood that, in order to implement the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing each function.
  • Those skilled in the art should readily appreciate that, in combination with the modules and algorithm steps of the examples described in the embodiments provided herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a given function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of this application.
  • The embodiments of the present application may divide the electronic device into functional modules according to the foregoing method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiments of the present application is illustrative and is only a division by logical function; there may be other division methods in actual implementation.
  • FIG. 5 is a block diagram of the functional module composition of the control device 500 involved in an embodiment of the present application.
  • The control device 500 is applied to an electronic device and includes a processing unit 501 and a communication unit 502, where:
  • the processing unit 501 is configured to obtain an original image through the communication unit 502, the original image being an image collected while the electronic device illuminates a target user wearing glasses with an infrared IR lamp; to determine at least two light spots in the original image; to determine, among the at least two light spots, the two spots with the largest and second-largest areas as target spots; and to determine the human eye region in the original image according to the target spots.
  • The control device 500 may further include a storage unit 503 for storing program code and data of the electronic device. The processing unit 501 may be a processor, the communication unit 502 may be a touch display screen or a transceiver, and the storage unit 503 may be a memory.
  • It can be seen that, in the embodiments of the present application, the electronic device first obtains an original image, the original image being the image collected by the electronic device while it illuminates the target user wearing glasses with an infrared IR lamp; next, at least two light spots in the original image are determined; then, the two spots with the largest and second-largest areas among the at least two spots are determined as the target spots; and finally, the human eye region in the original image is determined according to the target spots.
  • In this way, the electronic device of the embodiments of the present application can illuminate a target user wearing glasses with an infrared IR lamp, collect the original image under that illumination, and determine the human eye region by locating the light spots in the original image, which broadens the mechanisms for locating the eye region and improves the efficiency of eye locating.
  • In a possible example, in terms of determining the human eye region in the original image according to the target spots, the processing unit 501 is specifically configured to: when it is detected that the glasses worn by the target user are skewed, determine the skewness of the glasses worn by the target user; if the skewness is greater than or equal to a preset skewness, correct the target spots according to the skewness; and determine the human eye region in the original image according to the corrected target spots.
  • In a possible example, in terms of determining the human eye region in the original image according to the corrected target spots, the processing unit 501 is specifically configured to: determine the reference point of each spot among the corrected target spots to obtain two reference points of the original image; and determine the human eye region in the original image according to the two reference points, the human eye region including at least one region with a specific shape.
  • In a possible example, in terms of determining the reference point of each spot among the corrected target spots to obtain the two reference points of the original image, the processing unit 501 is specifically configured to: obtain position information corresponding to at least one pixel of each corrected target spot, the position information including an abscissa and an ordinate; determine the average of the abscissas of the at least one pixel; determine the average of the ordinates of the at least one pixel; and determine the reference point of each corrected target spot according to the average abscissa and the average ordinate, to obtain the two reference points of the original image.
  • In a possible example, in terms of determining the human eye region in the original image according to the target spots, the processing unit 501 is specifically configured to: when it is detected that the glasses worn by the target user are not skewed, or when it is detected that the glasses are skewed but the skewness is less than the preset skewness, determine the reference point of each of the two spots with the largest and second-largest areas to obtain two reference points of the original image; and determine the human eye region in the original image according to the two reference points, the human eye region including at least one region with a specific shape.
  • In a possible example, in terms of determining the human eye region in the original image according to the two reference points, the processing unit 501 is specifically configured to: determine the midpoint of the two reference points; and determine the human eye region in the original image with the midpoint as the geometric center.
  • In a possible example, in terms of determining the human eye region in the original image according to the two reference points, the processing unit 501 is specifically configured to: determine a first region with the first of the two reference points as a first center point; and determine a second region with the second of the two reference points as a second center point, the first region and the second region being the human eye region.
  • In a possible example, in terms of determining the at least two light spots in the original image, the processing unit 501 is specifically configured to: obtain a gray value for each of a plurality of pixels of the original image; determine a plurality of target pixels whose gray values are greater than a preset gray threshold; and determine the at least two light spots in the original image according to the plurality of target pixels.
  • An embodiment of the present application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any method described in the above method embodiments; the above computer includes an electronic device.
  • The embodiments of the present application also provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute part or all of the steps of any method described in the above method embodiments. The computer program product may be a software installation package, and the above computer includes an electronic device.
  • In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into modules is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed.
  • In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be electrical or take other forms.
  • The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • The program can be stored in a computer-readable memory, and the memory can include a flash disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and so on.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

A human eye region locating method and a related apparatus. The method is applied to an electronic device and includes: obtaining an original image, the original image being an image collected while the electronic device illuminates a target user wearing glasses with an infrared (IR) lamp; then determining at least one light spot in the original image; then determining, among the at least one light spot, the two spots with the largest and second-largest areas as the target spots; and finally determining the human eye region in the original image according to the target spots. This helps improve the efficiency of locating the human eye.

Description

人眼区域定位方法、相关装置 技术领域
本申请涉及电子技术领域,具体涉及一种人眼区域定位方法、相关装置。
背景技术
随着科技的发展,目前,市面上的一些电子设备支持人眼区域定位功能,利用眼球追踪技术来实现用户对电子设备的操控,而眼球追踪技术的基础是实现人眼区域定位。现有技术中,电子设备在进行人眼区域定位时往往是先定位人脸区域,再缩小人脸区域至人脸兴趣区域,这样的人眼区域定位方式,适用范围窄,且人眼区域定位的效率低。
发明内容
本申请实施例提供了一种人眼区域定位方法、相关装置,以期拓展人眼区域定位的机制,提升人眼定位的效率。
第一方面,本申请实施例提供一种人眼区域定位方法,应用于电子设备,所述方法包括:
获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像;
确定所述原始图像中的至少两处光斑;
确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑;
根据所述目标光斑确定所述原始图像中的人眼区域。
第二方面,本申请实施例提供一种人眼区域定位装置,应用于电子设备,包括处理单元和通信单元,其中,
所述处理单元,用于通过所述通信单元获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像;以及用于确定所述原始图像中的至少两处光斑;以及用于确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑;以及用于根据所述目标光斑确定所述原始图像中的人眼区域。
第三方面,本申请实施例提供一种电子设备,包括处理器、存储器、通信接口以及一个或多个程序,其中,上述一个或多个程序被存储在上述存储器中,并且被配置由上述处理器执行,上述程序包括用于执行本申请实施例第一方面任一方法中的步骤的指令。
第四方面,本申请实施例提供了一种计算机可读存储介质,其中,上述计算机可读存储介质存储用于电子数据交换的计算机程序,其中,上述计算机程序使得计算机执行如本申请实施例第二方面任一方法中所描述的部分或全部步骤。
第五方面,本申请实施例提供了一种计算机程序产品,其中,上述计算 机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如本申请实施例第二方面任一方法中所描述的部分或全部步骤。该计算机程序产品可以为一个软件安装包。
可以看出,本申请实施例中,电子设备首先获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像,其次,确定所述原始图像中的至少两处光斑,然后,确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑,最后,根据所述目标光斑确定所述原始图像中的人眼区域。可见,本申请实施例的电子设备能够通过红外线IR灯照射佩戴眼镜的目标用户,并采集到电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下的原始图像,通过定位原始图像中的光斑确定人眼区域,拓展了人眼区域定位的机制,提升了人眼定位的效率。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A是本申请实施例提供的一种人眼区域定位方法的流程示意图;
图1B为原始图像中的多处光斑的示意图;
图1C为以两个参考点的中点为几何中心确定原始图像中的人眼区域的示意图;
图1D为分别以两个参考点为中心点确定原始图像中的人眼区域的示意图;
图2是本申请实施例提供的另一种人眼区域定位方法的流程示意图;
图3是本申请实施例提供的再一种人眼区域定位方法的流程示意图;
图4是本申请实施例提供的一种电子设备的结构示意图;
图5是本申请实施例提供的一种人眼区域定位装置的功能模块组成框图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元 的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
本申请实施例所涉及到的电子设备可以是具备通信能力的电子设备,该电子设备可以包括各种具有无线通信功能的手持设备、车载设备、可穿戴设备、计算设备或连接到无线调制解调器的其他处理设备,以及各种形式的用户设备(UserEquipment,UE),移动台(MobileStation,MS),终端设备(terminaldevice)等等。
现有技术中,电子设备在进行人眼区域定位时往往是先定位人脸区域,再缩小人脸区域至人脸兴趣区域,这样的人眼区域定位方式,适用范围窄,且人眼区域定位的效率低。
基于上述问题,本申请实施例提出一种人眼区域定位方法,下面结合附图对本申请实施例进行详细介绍。
请参阅图1A,图1A是本申请实施例提供了一种人眼区域定位方法的流程示意图,应用于电子设备,如图1A所示,本人眼区域定位方法包括:
S101,电子设备获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像。
其中,所述原始图像中包括佩戴眼镜的目标用户的人脸图像以及通过红外线IR灯照射产生的一系列光斑,其中,红外线IR灯(infrared lamp),辐射的主要成分在红外光谱范围内的灯。
其中,所述电子设备获取原始图像的实现方式可以是:电子设备获取通过红外线IR灯照射佩戴眼镜的目标用户状态下的多张第一图像;所述电子设备确定所述多张第一图像中目标用户的眼睛处于睁开状态的m张第二图像,m为大于1的正整数;所述电子设备确定所述m张第二图像中无用户的手部遮挡眼镜的n张第三图像,n为大于1小于m的正整数;所述电子设备确定所述n张第三图像中用户的任意一张第三图像为所述原始图像。
需要说明的是,实现人眼区域定位的目的是为了进一步实现眼球追踪,因此,原始图像为目标用户的眼睛处于睁开状态的图像。
可见,本示例中,电子设备能够通过红外线IR灯照射佩戴眼镜的目标用户,并获取所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下的图像,提升人眼区域定位的可靠性。
S102,所述电子设备确定所述原始图像中的至少两处光斑。
其中,所述原始图像中的至少一处光斑为红外线IR灯照射佩戴眼镜的目标用户所产生的光斑,当电子设备通过红外线IR灯照射佩戴眼镜的目标用户时,可能会产生多个光斑,首先是在目标用户佩戴的眼镜的两个镜片中的每一个镜片上会形成镜片光斑,其次,在目标用户佩戴的眼镜的镜框上也可能会形成镜框光斑,且当用户的眼睛处于睁开状态时,在IR灯的照射下,用户的眼睛处也会出现瞳孔光斑,所述至少一处光斑可能还包括其他光斑,不作具体限定,而图像中的光斑可以作为图像的图像特征,所述确定所述原始图像中的至少两处光斑本质上为初步确定图像的特征。
举例来说,请参考图1B,图1B为原始图像中的多处光斑的示意图,如图1B所示,所述原始图像中包括两处瞳孔光斑,两处镜片光斑,以及三处镜框光斑,其中,两处镜片光斑的面积最大。
可见,本示例中,电子设备能够确定原始图像中的至少一处光斑。
S103,所述电子设备确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑。
其中,IR灯照射佩戴眼镜的目标用户时,所述面积最大和次大的两个光斑为在眼镜的两个镜片中的每一个镜片上形成的光斑。
可见,本示例中,电子设备能够确定原始图像中的至少一处光斑中面积最大和次大的两个光斑为目标光斑。
S104,所述电子设备根据所述目标光斑确定所述原始图像中的人眼区域。
其中,所述目标光斑为在眼镜的两个镜片中的每一个镜片上形成的光斑,因为用户距离电子设备的距离以及用户的姿势,该眼镜的两个镜片中的每一个镜片上形成的光斑的位置可能发生改变,但是,都落在眼镜的镜片上,而眼镜与用户的眼睛的相对位置不会发生太大改变,因此,在实际应用中,可将该目标用户佩戴的眼镜的两个镜片中的每一个镜片上会形成的光斑作为目标光斑,以确定人眼区域。
可见,本示例中,电子设备能够通过红外线IR灯照射佩戴眼镜的目标用户,并采集到电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下的原始图像,通过定位原始图像中的光斑确定人眼区域,拓展了人眼区域定位的机制,提升了人眼定位的效率。
可以看出,本申请实施例中,电子设备首先获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像,其次,确定所述原始图像中的至少两处光斑,然后,确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑,最后,根据所述目标光斑确定所述原始图像中的人眼区域。可见,本申请实施例的电子设备能够通过红外线IR灯照射佩戴眼镜的目标用户,并采集到电子设备通过红外线IR灯 照射佩戴眼镜的目标用户状态下的原始图像,通过定位原始图像中的光斑确定人眼区域,拓展了人眼区域定位的机制,提升了人眼定位的效率。
在一个可能的示例中,所述电子设备根据所述目标光斑确定所述原始图像中的人眼区域,包括:所述电子设备在检测到所述目标用户配戴眼镜歪斜时,确定所述目标用户配戴眼镜的歪斜度;若所述歪斜度大于或者等于预设歪斜度,则根据所述歪斜度对所述目标光斑进行校正;所述电子设备根据校正后的所述目标光斑确定所述原始图像中的人眼区域。
其中,所述歪斜是指所述用户佩戴眼镜时,将眼镜带歪了,所述根据所述歪斜度对所述目标光斑进行校正的实现方式可以是:所述电子设备沿着所述眼镜歪斜的方向对所述目标光斑进行校正。
举例来说,所述预设歪斜度可以是一个角度,如5度,当测量处所述原始图像中的所述歪斜度为6度时,大于5度,则以眼镜的镜架中心为旋转中心,将所述两个目标光斑沿着偏移的方向旋转6度。
可见,本示例中,电子设备能够根据用户佩戴眼镜的情况对目标光斑进行校正,使得人眼区域定位更加准确。
在一个可能的示例中,所述电子设备根据校正后的所述目标光斑确定所述原始图像中的人眼区域,包括:所述电子设备确定校正后的所述目标光斑中每个光斑的参考点,得到所述原始图像的两个参考点;所述电子设备根据所述两个参考点确定所述原始图像中的人眼区域,所述人眼区域包括至少一个具有特定形状的区域。
其中,所述人眼区域可以是一个具有特定形状的区域,该具有特定形状的区域包含所述两个参考点以及人眼,其中,该特定形状可以是圆形、椭圆形、方形、矩形、梯形、菱形以及多边形等规则图像中的任意一种,该特定形状可以是也可以是不规则形状,不作具体限定。
其中,所述人眼区域也可以是两个具有特定形状的区域,该两个具有特定形状的区域分别包含所述两个参考点中的一个参考点,其中,该两个具有特定形状的区域的大小何形状可以相同也可以不同,该特定形状可以是圆形、椭圆形、方形、矩形、梯形、菱形以及多边形等规则图像中的任意一种,该特定形状可以是也可以是不规则形状,不作具体限定,如所述人眼区域可以是一个矩形和一个椭圆。
可见,本示例中,电子设备能够根据校正后的所述目标光斑确定两个参考点,进而根据两个参考点原始图像中的人眼区域,提高人眼区域定位的智能性。
在一个可能的示例中,所述电子设备确定校正后的所述目标光斑中每个光斑的参考点,得到所述原始图像的两个参考点,包括:所述电子设备获取校正后的所述目标光斑中每个光斑的至少一个像素点对应的位置信息,所述 位置信息包括横坐标和纵坐标;所述电子设备确定所述至少一个像素点对应的横坐标的平均值;所述电子设备确定所述至少一个像素点对应的纵坐标的平均值;所述电子设备根据所述横坐标的平均值以及所述纵坐标的平均值确定校正后的所述目标光斑中每个光斑的参考点,得到所述原始图像的两个参考点。
其中,所述电子设备根据所述横坐标的平均值以及所述纵坐标的平均值确定校正后的所述目标光斑中每个光斑的参考点,得到所述原始图像的两个参考点的具体实现方式为:所述电子设备将校正后的所述目标光斑中每个光斑的横坐标的平均值作为对应参考点的横坐标,纵坐标的平均值作为对应参考点的纵坐标,得到所述原始图像的两个参考点。
可见,本示例中,电子设备能够根据通过校正后的所述目标光斑中每个光斑对应的像素点的位置,得到所述原始图像的两个参考点,提高人眼区域定位的智能性。
在一个可能的示例中,所述电子设备根据所述目标光斑确定所述原始图像中的人眼区域,包括:所述电子设备在检测到所述目标用户配戴眼镜未歪斜时,或者,在检测到所述目标用户配戴眼镜歪斜,且所述目标用户配戴眼镜的歪斜度小于预设歪斜度时,确定所述面积最大和次大的两个光斑中每个光斑的参考点,得到所述原始图像的两个参考点;所述电子设备根据所述两个参考点确定所述原始图像中的人眼区域,所述人眼区域包括至少一个具有特定形状的区域。
其中,确定所述面积最大和次大的两个光斑中每个光斑的参考点,得到所述原始图像的两个参考点的原理和确定校正后的所述目标光斑中每个光斑的参考点,得到所述原始图像的两个参考点的原理相同,此处,对应目标光斑未进行校正或者不需要进行校正的情况,请参照前述内容,此处不再赘述。
需要说明的时,若目标光斑不进行校正时,可能存在目标用户配戴眼镜的歪斜度大于预设歪斜度,因此,此种情形下对应的人眼区域可能需要扩大范围,才能使目标用户的眼睛包含在内,因此,在确定策略为不对原始图像进行任何校正,直接根据目标光斑确定人眼区域时,可通过扩大人眼区域的圈定范围实现。
可见,本示例中,电子设备能够确定原始图像中面积最大和次大的两个光斑中每个光斑的参考点,得到原始图像的两个参考点,以确定人眼区域,提高人眼区域定位的可靠性。
在一个可能的示例中,所述电子设备根据所述两个参考点确定所述原始图像中的人眼区域,包括:所述电子设备确定所述两个参考点的中点;所述电子设备以所述中点为几何中心确定所述原始图像中的人眼区域。
其中,所述电子设备以所述中点为几何中心确定所述原始图像中的人眼 区域可以是所述电子设备以所述中点为几何中心确定所述原始图像中的一个矩形区域为人眼区域;所述电子设备以所述中点为几何中心确定所述原始图像中的人眼区域也可以是所述电子设备以所述中点为几何中心确定所述原始图像中的一个平行四边形区域为人眼区域;所述电子设备以所述中点为几何中心确定所述原始图像中的人眼区域还可以是所述电子设备以所述中点为几何中心确定所述原始图像中的一个椭圆形区域为人眼区域;所述电子设备以所述中点为几何中心确定所述原始图像中的人眼区域还可以是所述电子设备以所述中点为几何中心确定所述原始图像中的一个不规则的区域为人眼区域;等等,不作具体限定。
举例来说,请参考图1C,图1C为以两个参考点的中点为几何中心确定原始图像中的人眼区域的示意图,如图1C所示,图中,点a和点b为参考点,点c为点a和点b的中点,人眼区域为以点c为中心点确定的一个椭圆形区域。
可见,本示例中,电子设备能够通过确认的参考点的中点为集合中心确定人眼区域。
在一个可能的示例中,所述电子设备根据所述两个参考点确定所述原始图像中的人眼区域,包括:所述电子设备以所述两个参考点中第一参考点为第一中心点确定第一区域;所述电子设备以所述两个参考点中第二参考点为第二中心点确定第二区域,所述第一区域与所述第二区域为人眼区域。
其中,第一区域与第二区域的形状和大小可以相同,如第一区域与第二区域为大小相同的圆形,第一区域与第二区域的形状和大小可以不完全相同,如第一区域与第二区域为大小不相同的圆形,第一区域为圆形且第二区域为椭圆形,第一区域为椭圆形且第二区域为圆形,等等,不作具体限定。
举例来说,请参考图1D,图1D为分别以两个参考点为中心点确定原始图像中的人眼区域的示意图,如图1D所示,图中,a点和b点为原始图像的参考点,人眼区域为以a点为中心的圆形区域以及以b点为中心的圆形区域。
可见,本示例中,电子设备能够以两个参考点为中心得到两个区域,该两个区域即是人眼区域。
在一个可能的示例中,所述电子设备确定所述原始图像中的至少两处光斑,包括:所述电子设备获取所述原始图像的多个像素点中每个像素点对应的灰度值;所述电子设备确定所述多个像素点中灰度值大于预设灰度阈值的多个目标像素点;所述电子设备根据所述多个目标像素点确定所述原始图像中的至少两处光斑。
其中,若原始图像为黑白图像,即是灰度图像,则所述原始图像的多个像素点中每个像素点对应的灰度值就是所述每个像素点的像素值。
其中,若原始图像为彩色图像,则所述原始图像的多个像素点中每个像素点对应的灰度值可以通过以下方式获取:所述电子设备确定所述原始图像 的多个像素点中每个像素点的RGB三通道对应的R、G、B的值;所述电子设备根据每个像素点的RGB三通道对应的R、G、B的值确定所述多个像素点RGB三通道对应的灰度值。
具体的,任何颜色都有红、绿、蓝三原色组成,假如原来某点的颜色为RGB(R,G,B),那么,所述电子设备根据每个像素点的RGB三通道对应的R、G、B的值确定所述多个像素点RGB三通道对应的灰度值可以是Gray=R*0.3+G*0.59+B*0.11;所述电子设备根据每个像素点的RGB三通道对应的R、G、B的值确定所述多个像素点RGB三通道对应的灰度值也可以是Gray=(R*30+G*59+B*11)/100;所述电子设备根据每个像素点的RGB三通道对应的R、G、B的值确定所述多个像素点RGB三通道对应的灰度值还可以是Gray=(R*28+G*151+B*77)>>8;所述电子设备根据每个像素点的RGB三通道对应的R、G、B的值确定所述多个像素点RGB三通道对应的灰度值还可以是Gray=(R+G+B)/3;所述电子设备根据每个像素点的RGB三通道对应的R、G、B的值确定所述多个像素点RGB三通道对应的灰度值还可以是Gray=G,所述电子设备根据每个像素点的RGB三通道对应的R、G、B的值确定所述多个像素点RGB三通道对应的灰度值还可以是其他确定方式,其中,Gray即是灰度值。
其中,所述预设灰度阈值可以是255,即Gray=255时,像素点为目标像素点,所述预设灰度阈值还可以是254,即Gray=254时,像素点为目标像素点,所述预设灰度阈值可以是200到255之间的任一值,不作具体限定。
可见,本示例中,电子设备能够原始图像的多个像素点中每个像素点对应的灰度值确定原始图像中的至少一处光斑,提升人眼区域定位的智能性。
与上述图1A所示的实施例一致的,请参阅图2,图2是本申请实施例提供的又一种人眼区域定位方法的流程示意图,如图所示,本人眼区域定位方法包括:
S201,电子设备获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像;
S202,所述电子设备获取所述原始图像的多个像素点中每个像素点对应的灰度值;
S203,所述电子设备确定所述多个像素点中灰度值大于预设灰度阈值的多个目标像素点;
S204,所述电子设备根据所述多个目标像素点确定所述原始图像中的至少两处光斑;
S205,所述电子设备确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑;
S206,所述电子设备根据所述目标光斑确定所述原始图像中的人眼区域。
可以看出,本申请实施例中,电子设备首先获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像,其次,确定所述原始图像中的至少两处光斑,然后,确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑,最后,根据所述目标光斑确定所述原始图像中的人眼区域。可见,本申请实施例的电子设备能够通过红外线IR灯照射佩戴眼镜的目标用户,并采集到电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下的原始图像,通过定位原始图像中的光斑确定人眼区域,拓展了人眼区域定位的机制,提升了人眼定位的效率。
与上述图1A所示的实施例一致的,请参阅图3,图3是本申请实施例提供的再一种人眼区域定位方法的流程示意图,如图所示,本人眼区域定位方法包括:
S301,电子设备获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像;
S302,所述电子设备确定所述原始图像中的至少两处光斑;
S303,所述电子设备确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑;
S304,所述电子设备在检测到所述目标用户配戴眼镜歪斜时,确定所述目标用户配戴眼镜的歪斜度;
S305,若所述歪斜度大于或者等于预设歪斜度,则根据所述歪斜度对所述目标光斑进行校正;
S306,所述电子设备根据校正后的所述目标光斑确定所述原始图像中的人眼区域;
S307,所述电子设备在检测到所述目标用户配戴眼镜未歪斜时,或者,在检测到所述目标用户配戴眼镜歪斜,且所述目标用户配戴眼镜的歪斜度小于预设歪斜度时,确定所述面积最大和次大的两个光斑中每个光斑的参考点,得到所述原始图像的两个参考点;
S308,所述电子设备根据所述两个参考点确定所述原始图像中的人眼区域,所述人眼区域包括至少一个具有特定形状的区域。
可以看出,本申请实施例中,电子设备首先获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像,其次,确定所述原始图像中的至少两处光斑,然后,确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑,最后,根据所述目标光斑确定所述原始图像中的人眼区域。可见,本申请实施例的电子设备能够通过红外线IR灯照射佩戴眼镜的目标用户,并采集到电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下的原始图像,通过定位原始图像中的光斑确定人眼区域,拓展了人眼区域定位的机制,提升了人眼定位的效率。
此外,电子设备能够根据校正后的所述目标光斑确定两个参考点,进而根据两个参考点原始图像中的人眼区域,提高人眼区域定位的智能性。
与上述图1A、图2以及图3所示的实施例一致的,请参阅图4,图4是本申请实施例提供的一种电子设备400的结构示意图,如图所示,所述电子设备400包括应用处理器410、存储器420、通信接口430以及一个或多个程序421,其中,所述一个或多个程序421被存储在上述存储器420中,并且被配置由上述应用处理器410执行,所述一个或多个程序421包括用于执行以下步骤的指令;
获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像;
确定所述原始图像中的至少两处光斑;
确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑;
根据所述目标光斑确定所述原始图像中的人眼区域。
可以看出,本申请实施例中,电子设备首先获取原始图像,所述原始图像为所述电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下采集到的图像,其次,确定所述原始图像中的至少两处光斑,然后,确定所述至少两处光斑中面积最大和次大的两个光斑为目标光斑,最后,根据所述目标光斑确定所述原始图像中的人眼区域。可见,本申请实施例的电子设备能够通过红外线IR灯照射佩戴眼镜的目标用户,并采集到电子设备通过红外线IR灯照射佩戴眼镜的目标用户状态下的原始图像,通过定位原始图像中的光斑确定人眼区域,拓展了人眼区域定位的机制,提升了人眼定位的效率。
在一个可能的示例中,在所述根据所述目标光斑确定所述原始图像中的人眼区域方面,所述一个或多个程序421的指令具体用于:在检测到所述目标用户配戴眼镜歪斜时,确定所述目标用户配戴眼镜的歪斜度;若所述歪斜度大于或者等于预设歪斜度,则根据所述歪斜度对所述目标光斑进行校正;根据校正后的所述目标光斑确定所述原始图像中的人眼区域。
在一个可能的示例中,在所述根据校正后的所述目标光斑确定所述原始图像中的人眼区域方面,所述一个或多个程序421的指令具体用于:确定校正后的所述目标光斑中每个光斑的参考点,得到所述原始图像的两个参考点;根据所述两个参考点确定所述原始图像中的人眼区域,所述人眼区域包括至少一个具有特定形状的区域。
在一个可能的示例中,在所述确定校正后的所述目标光斑中每个光斑的参考点,得到所述原始图像的两个参考点方面,所述一个或多个程序421的指令具体用于:获取校正后的所述目标光斑中每个光斑的至少一个像素点对应的位置信息,所述位置信息包括横坐标和纵坐标;确定所述至少一个像素点对应的横坐标的平均值;确定所述至少一个像素点对应的纵坐标的平均值; 根据所述横坐标的平均值以及所述纵坐标的平均值确定校正后的所述目标光斑中每个光斑的参考点,得到所述原始图像的两个参考点。
In a possible example, in terms of determining the human eye region in the original image according to the target spots, the instructions of the one or more programs 421 are specifically configured to: when it is detected that the glasses worn by the target user are not skewed, or when it is detected that the glasses worn by the target user are skewed but the degree of skew of the glasses is smaller than the preset skew threshold, determine a reference point for each of the two light spots with the largest and second-largest areas, obtaining two reference points of the original image; and determine the human eye region in the original image according to the two reference points, where the human eye region includes at least one region with a specific shape.
In a possible example, in terms of determining the human eye region in the original image according to the two reference points, the instructions of the one or more programs 421 are specifically configured to: determine the midpoint of the two reference points; and determine the human eye region in the original image with the midpoint as the geometric center.
In a possible example, in terms of determining the human eye region in the original image according to the two reference points, the instructions of the one or more programs 421 are specifically configured to: determine a first region with the first of the two reference points as a first center point; and determine a second region with the second of the two reference points as a second center point, where the first region and the second region are the human eye region.
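The two alternatives just described (a single region whose geometric center is the midpoint of the two reference points, or one region per reference point) could be sketched as follows; the rectangular shape and the half-width/half-height values are illustrative assumptions, since the disclosure only requires a region with a specific shape:

```python
def region_from_midpoint(p1, p2, half_w=120, half_h=60):
    # Variant 1: one region whose geometric center is the midpoint of the two
    # reference points (the region size is an assumed value for this sketch).
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    return (mx - half_w, my - half_h, mx + half_w, my + half_h)

def regions_from_reference_points(p1, p2, half_w=50, half_h=35):
    # Variant 2: a first region centered on the first reference point and a
    # second region centered on the second; together they form the eye region.
    def centered(p):
        return (p[0] - half_w, p[1] - half_h, p[0] + half_w, p[1] + half_h)
    return centered(p1), centered(p2)
```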
In a possible example, in terms of determining the at least two light spots in the original image, the instructions of the one or more programs 421 are specifically configured to: obtain a gray value corresponding to each of a plurality of pixels of the original image; determine, among the plurality of pixels, a plurality of target pixels whose gray values are greater than a preset gray threshold; and determine the at least two light spots in the original image according to the plurality of target pixels.
The foregoing has described the solutions of the embodiments of the present application mainly from the perspective of the method-side execution process. It can be understood that, in order to implement the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should easily realize that, in combination with the modules and algorithm steps of the examples described in the embodiments provided herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a particular function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The embodiments of the present application may divide the electronic device into functional modules according to the foregoing method examples. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is illustrative and is only a logical function division; there may be other division manners in actual implementation.
FIG. 5 is a block diagram of the functional modules of a control apparatus 500 involved in an embodiment of the present application. The control apparatus 500 is applied to an electronic device and includes a processing unit 501 and a communication unit 502, where:
the processing unit 501 is configured to obtain an original image through the communication unit 502, where the original image is an image captured by the electronic device while an infrared (IR) lamp illuminates a target user wearing glasses; to determine at least two light spots in the original image; to determine the two light spots with the largest and second-largest areas among the at least two light spots as target spots; and to determine the human eye region in the original image according to the target spots.
The control apparatus 500 may further include a storage unit 503 configured to store program code and data of the electronic device. The processing unit 501 may be a processor, the communication unit 502 may be a touch display screen or a transceiver, and the storage unit 503 may be a memory.
It can be seen that, in this embodiment of the present application, the electronic device first obtains an original image, where the original image is an image captured by the electronic device while an infrared (IR) lamp illuminates a target user wearing glasses; next, it determines at least two light spots in the original image; then, it determines the two light spots with the largest and second-largest areas among the at least two light spots as the target spots; and finally, it determines the human eye region in the original image according to the target spots. It can be seen that the electronic device of this embodiment can illuminate a target user wearing glasses with an IR lamp, capture an original image in that state, and determine the human eye region by locating the light spots in the original image, which extends the mechanisms available for human eye region positioning and improves the efficiency of human eye positioning.
In a possible example, in terms of determining the human eye region in the original image according to the target spots, the processing unit 501 is specifically configured to: when it is detected that the glasses worn by the target user are skewed, determine the degree of skew of the glasses worn by the target user; if the degree of skew is greater than or equal to a preset skew threshold, correct the target spots according to the degree of skew; and determine the human eye region in the original image according to the corrected target spots.
In a possible example, in terms of determining the human eye region in the original image according to the corrected target spots, the processing unit 501 is specifically configured to: determine a reference point for each of the corrected target spots, obtaining two reference points of the original image; and determine the human eye region in the original image according to the two reference points, where the human eye region includes at least one region with a specific shape.
In a possible example, in terms of determining a reference point for each of the corrected target spots to obtain two reference points of the original image, the processing unit 501 is specifically configured to: obtain position information corresponding to at least one pixel of each of the corrected target spots, where the position information includes a horizontal coordinate and a vertical coordinate; determine the average of the horizontal coordinates of the at least one pixel; determine the average of the vertical coordinates of the at least one pixel; and determine the reference point of each of the corrected target spots according to the average horizontal coordinate and the average vertical coordinate, obtaining the two reference points of the original image.
In a possible example, in terms of determining the human eye region in the original image according to the target spots, the processing unit 501 is specifically configured to: when it is detected that the glasses worn by the target user are not skewed, or when it is detected that the glasses worn by the target user are skewed but the degree of skew of the glasses is smaller than the preset skew threshold, determine a reference point for each of the two light spots with the largest and second-largest areas, obtaining two reference points of the original image; and determine the human eye region in the original image according to the two reference points, where the human eye region includes at least one region with a specific shape.
In a possible example, in terms of determining the human eye region in the original image according to the two reference points, the processing unit 501 is specifically configured to: determine the midpoint of the two reference points; and determine the human eye region in the original image with the midpoint as the geometric center.
In a possible example, in terms of determining the human eye region in the original image according to the two reference points, the processing unit 501 is specifically configured to: determine a first region with the first of the two reference points as a first center point; and determine a second region with the second of the two reference points as a second center point, where the first region and the second region are the human eye region.
In a possible example, in terms of determining the at least two light spots in the original image, the processing unit 501 is specifically configured to: obtain a gray value corresponding to each of a plurality of pixels of the original image; determine, among the plurality of pixels, a plurality of target pixels whose gray values are greater than a preset gray threshold; and determine the at least two light spots in the original image according to the plurality of target pixels.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any of the methods described in the foregoing method embodiments; the computer includes an electronic device.
An embodiment of the present application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any of the methods described in the foregoing method embodiments. The computer program product may be a software installation package; the computer includes an electronic device.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should be aware that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also be aware that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the modules is only a logical function division, and there may be other division manners in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or modules, and may be electrical or in other forms.
The modules described above as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
A person of ordinary skill in the art can understand that all or some of the steps of the various methods of the foregoing embodiments can be completed by a program instructing relevant hardware, and the program may be stored in a computer-readable memory, where the memory may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application; the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (20)

  1. A human eye region positioning method, applied to an electronic device, wherein the method comprises:
    obtaining an original image, wherein the original image is an image captured by the electronic device while an infrared (IR) lamp illuminates a target user wearing glasses;
    determining at least two light spots in the original image;
    determining the two light spots with the largest and second-largest areas among the at least two light spots as target spots; and
    determining a human eye region in the original image according to the target spots.
  2. The method according to claim 1, wherein the determining a human eye region in the original image according to the target spots comprises:
    when it is detected that the glasses worn by the target user are skewed, determining a degree of skew of the glasses worn by the target user;
    if the degree of skew is greater than or equal to a preset skew threshold, correcting the target spots according to the degree of skew; and
    determining the human eye region in the original image according to the corrected target spots.
  3. The method according to claim 2, wherein the determining the human eye region in the original image according to the corrected target spots comprises:
    determining a reference point for each of the corrected target spots, to obtain two reference points of the original image; and
    determining the human eye region in the original image according to the two reference points, wherein the human eye region comprises at least one region with a specific shape.
  4. The method according to claim 3, wherein the determining a reference point for each of the corrected target spots, to obtain two reference points of the original image, comprises:
    obtaining position information corresponding to at least one pixel of each of the corrected target spots, wherein the position information comprises a horizontal coordinate and a vertical coordinate;
    determining an average of the horizontal coordinates of the at least one pixel;
    determining an average of the vertical coordinates of the at least one pixel; and
    determining the reference point of each of the corrected target spots according to the average of the horizontal coordinates and the average of the vertical coordinates, to obtain the two reference points of the original image.
  5. The method according to claim 1, wherein the determining a human eye region in the original image according to the target spots comprises:
    when it is detected that the glasses worn by the target user are not skewed, or when it is detected that the glasses worn by the target user are skewed and the degree of skew of the glasses worn by the target user is smaller than a preset skew threshold, determining a reference point for each of the two light spots with the largest and second-largest areas, to obtain two reference points of the original image; and
    determining the human eye region in the original image according to the two reference points, wherein the human eye region comprises at least one region with a specific shape.
  6. The method according to any one of claims 3-5, wherein the determining the human eye region in the original image according to the two reference points comprises:
    determining a midpoint of the two reference points; and
    determining the human eye region in the original image with the midpoint as a geometric center.
  7. The method according to any one of claims 3-5, wherein the determining the human eye region in the original image according to the two reference points comprises:
    determining a first region with a first reference point of the two reference points as a first center point; and
    determining a second region with a second reference point of the two reference points as a second center point, wherein the first region and the second region are the human eye region.
  8. The method according to any one of claims 1-5, wherein the determining at least two light spots in the original image comprises:
    obtaining a gray value corresponding to each of a plurality of pixels of the original image;
    determining, among the plurality of pixels, a plurality of target pixels whose gray values are greater than a preset gray threshold; and
    determining the at least two light spots in the original image according to the plurality of target pixels.
  9. The method according to any one of claims 1-8, wherein the obtaining an original image comprises:
    obtaining at least one first image captured while an infrared (IR) lamp illuminates the target user wearing glasses; determining, among the at least one first image, m second images in which the eyes of the target user are open, wherein m is a positive integer greater than 1; determining, among the m second images, n third images in which no hand of the user blocks the glasses, wherein n is a positive integer greater than 1 and smaller than m; and determining any one of the n third images as the original image.
  10. A human eye region positioning apparatus, wherein the apparatus comprises a processing unit and a communication unit, wherein the processing unit is configured to:
    obtain an original image through the communication unit, wherein the original image is an image captured by the electronic device while an infrared (IR) lamp illuminates a target user wearing glasses;
    determine at least two light spots in the original image; determine the two light spots with the largest and second-largest areas among the at least two light spots as target spots; and
    determine a human eye region in the original image according to the target spots.
  11. The apparatus according to claim 10, wherein, in terms of the determining a human eye region in the original image according to the target spots, the processing unit is specifically configured to:
    when it is detected that the glasses worn by the target user are skewed, determine a degree of skew of the glasses worn by the target user;
    if the degree of skew is greater than or equal to a preset skew threshold, correct the target spots according to the degree of skew; and
    determine the human eye region in the original image according to the corrected target spots.
  12. The apparatus according to claim 11, wherein, in terms of the determining the human eye region in the original image according to the corrected target spots, the processing unit is specifically configured to:
    determine a reference point for each of the corrected target spots, to obtain two reference points of the original image; and
    determine the human eye region in the original image according to the two reference points, wherein the human eye region comprises at least one region with a specific shape.
  13. The apparatus according to claim 12, wherein, in terms of the determining a reference point for each of the corrected target spots to obtain two reference points of the original image, the processing unit is specifically configured to:
    obtain position information corresponding to at least one pixel of each of the corrected target spots, wherein the position information comprises a horizontal coordinate and a vertical coordinate;
    determine an average of the horizontal coordinates of the at least one pixel;
    determine an average of the vertical coordinates of the at least one pixel; and
    determine the reference point of each of the corrected target spots according to the average of the horizontal coordinates and the average of the vertical coordinates, to obtain the two reference points of the original image.
  14. The apparatus according to claim 10, wherein, in terms of the determining a human eye region in the original image according to the target spots, the processing unit is specifically configured to:
    when it is detected that the glasses worn by the target user are not skewed, or when it is detected that the glasses worn by the target user are skewed and the degree of skew of the glasses worn by the target user is smaller than a preset skew threshold, determine a reference point for each of the two light spots with the largest and second-largest areas, to obtain two reference points of the original image; and
    determine the human eye region in the original image according to the two reference points, wherein the human eye region comprises at least one region with a specific shape.
  15. The apparatus according to any one of claims 12-14, wherein, in terms of the determining the human eye region in the original image according to the two reference points, the processing unit is specifically configured to:
    determine a midpoint of the two reference points; and
    determine the human eye region in the original image with the midpoint as a geometric center.
  16. The apparatus according to any one of claims 12-14, wherein, in terms of the determining the human eye region in the original image according to the two reference points, the processing unit is specifically configured to:
    determine a first region with a first reference point of the two reference points as a first center point; and
    determine a second region with a second reference point of the two reference points as a second center point, wherein the first region and the second region are the human eye region.
  17. The apparatus according to any one of claims 10-14, wherein, in terms of the determining at least two light spots in the original image, the processing unit is specifically configured to:
    obtain a gray value corresponding to each of a plurality of pixels of the original image;
    determine, among the plurality of pixels, a plurality of target pixels whose gray values are greater than a preset gray threshold; and
    determine the at least two light spots in the original image according to the plurality of target pixels.
  18. The apparatus according to any one of claims 10-17, wherein, in terms of the obtaining an original image, the processing unit is specifically configured to:
    obtain at least one first image captured while an infrared (IR) lamp illuminates the target user wearing glasses; determine, among the at least one first image, m second images in which the eyes of the target user are open, wherein m is a positive integer greater than 1; determine, among the m second images, n third images in which no hand of the user blocks the glasses, wherein n is a positive integer greater than 1 and smaller than m; and determine any one of the n third images as the original image.
  19. An electronic device, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method according to any one of claims 1-9.
  20. A computer-readable storage medium, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-9.
PCT/CN2020/113541 2019-09-30 2020-09-04 人眼区域定位方法、相关装置 WO2021063155A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910952207.1A CN112580413A (zh) 2019-09-30 2019-09-30 人眼区域定位方法、相关装置
CN201910952207.1 2019-09-30

Publications (1)

Publication Number Publication Date
WO2021063155A1 true WO2021063155A1 (zh) 2021-04-08

Family

ID=75117223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113541 WO2021063155A1 (zh) 2019-09-30 2020-09-04 人眼区域定位方法、相关装置

Country Status (2)

Country Link
CN (1) CN112580413A (zh)
WO (1) WO2021063155A1 (zh)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678286B (zh) * 2016-02-29 2019-03-15 徐鹤菲 一种瞳孔定位方法及设备
JP6816913B2 (ja) * 2016-08-24 2021-01-20 富士通コネクテッドテクノロジーズ株式会社 携帯機器、認証方法および認証プログラム
CN108259758B (zh) * 2018-03-18 2020-10-09 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质和电子设备
CN108427938A (zh) * 2018-03-30 2018-08-21 广东欧珀移动通信有限公司 图像处理方法、装置、存储介质和电子设备
CN109034023A (zh) * 2018-07-13 2018-12-18 中国科学院深圳先进技术研究院 一种眼动数据确定方法、装置、设备及存储介质
CN110245601B (zh) * 2019-06-11 2022-03-01 Oppo广东移动通信有限公司 眼球追踪方法及相关产品

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03165735A (ja) * 1989-11-27 1991-07-17 Topcon Corp 眼科装置
CN102902970A (zh) * 2011-12-16 2013-01-30 北京天诚盛业科技有限公司 一种虹膜定位方法
CN106778641A (zh) * 2016-12-23 2017-05-31 北京七鑫易维信息技术有限公司 视线估计方法及装置
CN108921097A (zh) * 2018-07-03 2018-11-30 深圳市未来感知科技有限公司 人眼视角检测方法、装置及计算机可读存储介质
CN109063674A (zh) * 2018-08-22 2018-12-21 深圳先牛信息技术有限公司 一种基于眼球上光斑的虹膜活体检测方法及检测装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117291979A (zh) * 2023-09-26 2023-12-26 北京鹰之眼智能健康科技有限公司 一种耳洞定位方法、电子设备及存储介质
CN117291979B (zh) * 2023-09-26 2024-04-26 北京鹰之眼智能健康科技有限公司 一种耳洞定位方法、电子设备及存储介质

Also Published As

Publication number Publication date
CN112580413A (zh) 2021-03-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20870957

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20870957

Country of ref document: EP

Kind code of ref document: A1
