WO2024125217A1 - Light spot tracking method and apparatus, electronic device and storage medium

Light spot tracking method and apparatus, electronic device and storage medium

Info

Publication number
WO2024125217A1
Authority
WO
WIPO (PCT)
Prior art keywords
light spot
eye image
detection area
local detection
frame
Application number
PCT/CN2023/132630
Other languages
English (en)
French (fr)
Inventor
王辉
Original Assignee
北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司 (Beijing Zitiao Network Technology Co., Ltd.)
Publication of WO2024125217A1

  • The present disclosure relates to the field of image processing technology, and in particular to a light spot tracking method and apparatus, an electronic device, and a storage medium.
  • When measuring interpupillary distance, a VR device must accurately detect the position of the light spot on the iris and then use that position to determine the pupil distance parameter, so that the device can adjust the distance between its two lenses accordingly, ensuring a good visual experience for the user.
  • A light spot tracking method, comprising: extracting a target frame eye image and a frame eye image to be detected from an eye image sequence, wherein the target frame eye image contains a light spot, and the acquisition time of the target frame eye image is earlier than the acquisition time of the frame eye image to be detected; acquiring a local detection area from the frame eye image to be detected based on the light spot; and, in response to a pixel brightness judgment result for a first target position located in the local detection area, determining a light spot recognition result of the frame eye image to be detected, wherein the first target position is the position with the highest brightness in the local detection area.
  • A light spot tracking apparatus, comprising:
  • an extraction module, configured to extract a target frame eye image and a frame eye image to be detected from an eye image sequence, wherein the target frame eye image contains a light spot, and the acquisition time of the target frame eye image is earlier than the acquisition time of the frame eye image to be detected;
  • an acquisition module, configured to acquire a local detection area from the frame eye image to be detected based on the light spot; and
  • a recognition module, configured to determine a light spot recognition result of the frame eye image to be detected in response to a pixel brightness judgment result for a first target position located in the local detection area, wherein the first target position is the position with the highest brightness in the local detection area.
  • An electronic device, comprising:
  • a processor; and
  • a memory for storing a program, wherein the program includes instructions that, when executed by the processor, cause the processor to execute the method according to the exemplary embodiments of the present disclosure.
  • A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the method according to the embodiments of the present disclosure.
  • FIG. 1 shows a flow chart of a light spot tracking method according to an exemplary embodiment of the present disclosure;
  • FIG. 2A shows a schematic diagram of a light spot recognition principle according to an exemplary embodiment of the present disclosure;
  • FIG. 2B shows another schematic diagram of the light spot recognition principle according to an exemplary embodiment of the present disclosure;
  • FIG. 3A shows a first schematic diagram of a light spot tracking process for an eye video sequence according to an exemplary embodiment of the present disclosure;
  • FIG. 3B shows a second schematic diagram of the light spot tracking process for an eye video sequence according to an exemplary embodiment of the present disclosure;
  • FIG. 4 shows a schematic diagram of the coordinate system of a local detection area according to an exemplary embodiment of the present disclosure;
  • FIG. 5 shows a functional schematic block diagram of a light spot tracking apparatus according to an exemplary embodiment of the present disclosure;
  • FIG. 6 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure;
  • FIG. 7 shows a block diagram of an exemplary electronic device that can be used to implement an embodiment of the present disclosure.
  • A display device such as virtual reality (VR) glasses needs to adjust the pupil distance for different users. Only with an appropriate pupil distance can appropriate object and image distances be obtained, and only then can the user see the image on the display screen clearly. Therefore, to obtain an accurate pupil distance, the position of the light spot on the iris must be accurately detected and located; the light spot position is crucial for determining the pupil distance parameters.
  • In the related art, the position of the light spot is detected over the entire eye image by methods such as pixel brightness retrieval, whole-image gradient, or image heat map. Light spot detection thus requires a global search of the eye image, which entails a large amount of computation and high power consumption. For mobile devices in particular, high power consumption drains the battery quickly, which harms battery life and makes the device prone to heating up.
  • In view of this, an exemplary embodiment of the present disclosure provides a light spot tracking method that exploits the inter-frame movement characteristics of the light spot: it performs local light spot detection on the frame eye image to be detected while maintaining detection accuracy, thereby avoiding a global search for the light spot across the image, reducing computation and device power consumption, and improving the battery life of mobile devices.
  • One or more technical solutions provided in the embodiments of the present disclosure take into account the short distance the light spot moves between frames, and obtain a local detection area from the frame eye image to be detected based on the light spot contained in the target frame eye image, thereby narrowing the light spot detection range in the frame eye image to be detected.
  • The position with the highest brightness in the local detection area can be set as the first target position, and the light spot recognition result of the frame eye image to be detected can then be determined based on the pixel brightness judgment result for the first target position, so as to determine whether the light spot is tracked in the frame eye image to be detected.
  • When the method of the exemplary embodiment of the present disclosure performs light spot tracking, it combines the inter-frame movement characteristics of the light spot and performs local light spot detection on the frame eye image to be detected while maintaining detection accuracy, thereby avoiding a global search over the image. On this basis, the method can reduce computation and device power consumption while maintaining detection accuracy, and improve the endurance of mobile devices.
  • The method of the exemplary embodiment of the present disclosure can be performed by an electronic device or a chip of an electronic device, where the electronic device has a camera, a video camera, or another image acquisition function.
  • The electronic device can be a smartphone (such as an Android or iOS phone), a wearable device, an AR (augmented reality)/VR (virtual reality) device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a tablet computer, a handheld computer, a mobile Internet device (MID), or another device using a flash memory device.
  • The electronic device is equipped with a camera, which can be a monocular camera, a binocular camera, etc., and the image collected can be, but is not limited to, a grayscale image, a color image, or a thermal image.
  • FIG. 1 shows a flow chart of a light spot tracking method according to an exemplary embodiment of the present disclosure.
  • The light spot tracking method of the exemplary embodiment of the present disclosure may include:
  • Step 101: extract a target frame eye image and a frame eye image to be detected from an eye image sequence, wherein the target frame eye image contains a light spot, and the acquisition time of the target frame eye image is earlier than the acquisition time of the frame eye image to be detected.
  • The number of light spots here can be one or more.
  • The target frame eye image of the exemplary embodiment of the present disclosure may be the first frame eye image of the eye image sequence, or another frame eye image.
  • The target frame eye image and the frame eye image to be detected may be adjacent frames or interval frames.
  • For adjacent frames, the target frame eye image may be the kth frame eye image and the frame eye image to be detected the (k+1)th frame eye image, where k is an integer greater than or equal to 1.
  • Where the target frame eye image and the frame eye image to be detected are interval frames, the target frame eye image may be the kth frame eye image and the frame eye image to be detected the (k+r)th frame eye image, where k is an integer greater than or equal to 1 and r is an integer greater than or equal to 2. For example, r may be 2 to 4.
  • The target frame image and the frame image to be detected are two frames of the eye image sequence collected while the eyeball moves in a single direction. This facilitates determining the local detection area within the frame eye image to be detected according to the inter-frame movement characteristics, ensures the accuracy of the determined local detection area, and improves the success rate of spot recognition.
  • The eye image sequence of the exemplary embodiment of the present disclosure may include an image sequence of multiple frames of eye images, for example a video file, acquired by an image acquisition device, that includes the eyes of the target object.
  • The eye image of the exemplary embodiment of the present disclosure may be a human eye image or an animal eye image, and the eye image sequence may contain only eyes, facial images including eyes, or full-body images including eyes, etc.
  • The target frame eye image can be converted to grayscale before light spot detection is performed on it.
  • The light spot detection method can be any method such as pixel brightness retrieval, whole-image gradient, or image heat map, which is not limited in the present disclosure.
  • Step 102: obtain a local detection area from the frame eye image to be detected based on the light spot.
  • When the target frame eye image contains multiple light spots, multiple local detection areas can be obtained, so that each light spot has a corresponding local detection area in the frame eye image to be detected.
  • The light spots of the target frame eye image correspond one-to-one to the local detection areas of the frame eye image to be detected; that is, the number of light spots in the target frame eye image equals the number of local detection areas in the frame eye image to be detected.
  • The geometric center of the local detection area may coincide with the center of the corresponding light spot in the target frame eye image, where the center of the light spot may be its geometric center or the position with the highest brightness within it.
  • When the geometric center of the local detection area coincides with the geometric center of the light spot, the geometric center of the local detection area can be placed on the frame eye image to be detected according to the geometric center coordinates of the light spot, so that the two sets of coordinates coincide; the size of the local detection area is then determined based on the size of the light spot, so that the local detection area is larger than the light spot.
  • After the light spot is detected in the target frame eye image, taking the geometric center of the light spot as the center, the area covered by the light spot is expanded based on the characteristics of inter-frame light spot movement, so as to obtain the size of the local detection area.
  • The size of the local detection area being larger than the size of the light spot can mean: the area bounded by the edge of the light spot lies within, and is smaller than, the area bounded by the edge of the corresponding local detection area.
  • The light spot and the local detection area can both be circular, in which case the radial size of the local detection area can be 1 to 3 pixels larger than the radial size of the light spot.
  • Alternatively, the position with the highest brightness within the light spot of the target frame eye image (defined as the brightness center) can be determined, the geometric center of the corresponding local detection area can be placed in the frame eye image to be detected at the coordinates of the brightness center, and the size of the local detection area can then be set based on the characteristics of inter-frame light spot movement, so as to determine the area it occupies in the frame eye image to be detected (a sketch of this step follows below).
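  • As an illustrative aid (not part of the original disclosure), the following Python sketch shows one way Step 102 could be realized: cropping a local detection area centered on a spot center found in the target frame. The names (get_local_detection_area, half_size) are hypothetical, and clamping at the image border is an added assumption.

```python
import numpy as np

def get_local_detection_area(eye_image, spot_center, half_size):
    """Crop a square local detection area from the frame to be detected,
    centered on the spot center found in the target frame eye image.

    spot_center: (x, y) integer pixel coordinates of the spot center
                 (geometric center or brightness center) in the target frame.
    half_size:   pixels from the area's geometric center to its edge
                 (N in the text), so the area spans 2*half_size + 1 pixels.
    Returns the cropped area and the (x, y) offset of its top-left corner.
    """
    h, w = eye_image.shape[:2]
    x, y = spot_center
    # Clamp to the image bounds so areas near the border remain valid.
    x0, x1 = max(0, x - half_size), min(w, x + half_size + 1)
    y0, y1 = max(0, y - half_size), min(h, y + half_size + 1)
    return eye_image[y0:y1, x0:x1], (x0, y0)
```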
  • Step 103: in response to the pixel brightness judgment result for the first target position in the local detection area, determine the light spot recognition result of the frame eye image to be detected, where the first target position is the position with the highest brightness in the local detection area.
  • The exemplary embodiments of the present disclosure can determine whether the pixel brightness of the first target position located in the local detection area meets the light spot center constraint condition, thereby obtaining the pixel brightness judgment result. If it does, the light spot recognition result includes: the local detection area has an intersection with the light spot to be detected, where the light spot to be detected is the light spot of the target frame eye image as tracked in the frame eye image to be detected.
  • The first target position can correspond to one pixel in the local detection area or to multiple pixels, depending on the method actually used to determine it.
  • For example, the grayscale centroid of the local detection area can be used as the first target position, in which case the first target position corresponds to the pixel at the grayscale centroid coordinates of the local detection area.
  • The local detection area here corresponds to a light spot of the target frame eye image; multiple corresponding local detection areas can be obtained from the frame eye image to be detected based on the multiple light spots in the target frame eye image, where each local detection area represents the possible position, in the frame eye image to be detected, of the corresponding light spot of the target frame eye image.
  • When the target frame eye image has multiple light spots, the method of the exemplary embodiment of the present disclosure further includes: storing the center coordinates of the multiple light spots of the target frame eye image in a light spot sequence array; if the first target position located in a local detection area meets the light spot center constraint condition, using the coordinates of that first target position to update the light spot center coordinates corresponding to that local detection area in the array; and if it does not, occupying the corresponding position in the array with an empty array, so as to update the light spot sequence array (see the sketch below).
  • When the light spot sequence array of the target frame eye image is updated in this way, the light spot order in the array for the frame eye image to be detected is guaranteed to match the light spot order in the array for the target frame eye image.
  • Concretely, for the light spot center coordinates at each position of the target frame's array: if the local detection area obtained from the frame eye image to be detected based on those coordinates contains a real light spot, the coordinates are replaced with the coordinates of the first target position of that local detection area; if it does not contain a real light spot, the coordinates are replaced by an empty array.
  • The light spot sequence array of the frame eye image to be detected thus reflects the spot tracking result for the target frame eye image. Whether a given spot is tracked in the frame to be detected can therefore be determined by whether the corresponding position in the array is an empty array, which enables tracking of a specified spot. For example, if a position in the array of the frame to be detected holds an empty array, the spot at the same position in the target frame's array was not tracked in the frame to be detected.
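  • A minimal sketch of this bookkeeping, assuming Python with None standing in for the "empty array" placeholder (the disclosure does not prescribe a data structure):

```python
def update_spot_sequence(new_positions, satisfied):
    """Build the spot sequence array for the frame to be detected.

    new_positions: per-spot first-target-position coordinates found in each
                   spot's local detection area, in the target frame's order.
    satisfied:     per-spot booleans - whether the first target position met
                   the light spot center constraint condition.
    A tracked spot keeps its new center; a lost spot keeps an empty
    placeholder so the spot order of the array is preserved.
    """
    return [pos if ok else None
            for pos, ok in zip(new_positions, satisfied)]
```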
  • The above eye image sequence is collected by an image acquisition device.
  • The exemplary embodiment of the present disclosure may include: determining the size parameter of the local detection area based on the acquisition frame rate of the image acquisition device, and determining the local detection area based on the light spot center and that size parameter. It should be understood that the unit pixel size is determined by the photosensitivity parameters of the image acquisition device.
  • The inter-frame light spot movement distance can be determined statistically, and the size parameter of the local detection area can then be determined based on that distance and the unit pixel size of the eye images in the sequence.
  • The span from the target frame eye image to the frame eye image to be detected is referred to here as "inter-frame".
  • The inter-frame light spot movement distance refers to the distance the light spot moves from the target frame eye image to the frame eye image to be detected.
  • When the acquisition frame rate is higher, the acquisition interval between adjacent frames is shorter, so the inter-frame light spot movement distance is smaller.
  • When the acquisition frame rate is lower, the acquisition interval between adjacent frames is longer, so the inter-frame light spot movement distance is larger. The size of the local detection area is therefore positively correlated with the inter-frame light spot movement distance of the eye image sequence.
  • The size parameter of the local detection area in the exemplary embodiment of the present disclosure can be determined not only from the inter-frame movement distance implied by the acquisition frame rate, but also by combining the geometric shape of the local detection area with the acquisition frame rate, to determine the size parameter or the coverage range of the local detection area in the frame eye image to be detected.
  • The size parameters may include radial dimensions, such as a radius or a diameter.
  • Alternatively, the size parameters include diagonal parameters, such as a diagonal length or diagonal vertex coordinates.
  • For example, one frame of eye image may be captured every few milliseconds, so the distance the light spot center moves from the target frame eye image to the frame eye image to be detected is relatively short.
  • The center of the light spot of the target frame eye image can therefore be set as the geometric center of the local detection area.
  • In this way, the position and coverage of the local detection area can be accurately located.
  • The local detection area is a local region of the frame eye image to be detected, so light spot detection on that frame does not need to examine all of its pixels, which reduces the computation of light spot detection and the power consumption of the device.
  • The ratio of the inter-frame light spot movement distance to the unit pixel size can be used to determine the number of pixels from the geometric center of the local detection area to its edge. If this ratio is fractional, it can be rounded up, making the local detection area slightly larger than the corresponding light spot range in the target frame eye image and thereby avoiding errors when subsequently confirming whether the local detection area contains a true light spot.
  • The size parameter of the local detection area is greater than or equal to N, where N = ⌈d0 / dpix⌉ is the minimum number of pixels from the geometric center of the local detection area to its edge, d0 is the inter-frame light spot movement distance, and dpix is the unit pixel size of the eye image sequence. A small sketch of this computation follows.
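  • A one-line sketch of the rounding rule, with hypothetical names (d0 and dpix must be expressed in the same physical unit):

```python
import math

def min_half_size(d0, dpix):
    """N = ceil(d0 / dpix): minimum number of pixels from the geometric
    center of the local detection area to its edge. Rounding up keeps the
    area slightly larger than the spot's possible inter-frame travel."""
    return math.ceil(d0 / dpix)
```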
  • The light spot center constraint condition of the exemplary embodiment of the present disclosure includes: the difference between the brightness of the first target position and a target brightness is greater than or equal to a brightness threshold, where the target brightness is determined from the brightness of at least one second target position located at the edge of the local detection area.
  • The brightness threshold serves as the basis for judging whether the brightness of the local detection area is significantly higher than that of its surroundings. It should be understood that the brightness threshold can be set according to actual conditions.
  • If the difference between the pixel brightness of the first target position and the target brightness is greater than or equal to the brightness threshold, the local detection area is markedly brighter than its surroundings, and the light spot of the target frame eye image corresponding to that local detection area is tracked in the frame eye image to be detected. If the difference is less than the brightness threshold, the local detection area is not significantly brighter than its surroundings, so it is not a real light spot, and the corresponding light spot of the target frame eye image has disappeared in the frame eye image to be detected.
  • When there are multiple second target positions, the target brightness may be determined from their pixel brightnesses, for example as the average pixel brightness of the multiple second target positions.
  • The average pixel brightness of the multiple second target positions may be set as the target brightness, and it is then determined whether the pixel brightness of the first target position satisfies the light spot center constraint condition with respect to the target brightness; this avoids detection errors caused by an abnormally bright local pixel outside the local detection area.
  • The pixels at the second target positions in the frame eye image to be detected can lie in any direction from the pixel at the first target position, such as above, below, to the left, and to the right, and of course in other directions.
  • The shape of the local detection area can be circular, elliptical, rectangular, irregular, etc. The following takes a rectangular local detection area as an example.
  • FIG. 2A shows a schematic diagram of a light spot recognition principle of an exemplary embodiment of the present disclosure.
  • As shown in FIG. 2A, the local detection area 200 is a rectangular local detection area, the pixel at the first target position is denoted O, and the pixel at the second target position is denoted P1.
  • The pixel O at the first target position coincides with the pixel at the geometric center of the local detection area 200.
  • The pixel P1 at the second target position lies on a pixel crossed by the edge line of the local detection area 200 (refer to FIG. 2A), or it can lie in the non-local detection area outside the local detection area 200 (not shown), either adjacent or not adjacent to the pixels on the contour line of the local detection area 200 (not shown).
  • Here the target brightness is the brightness of pixel P1 at the second target position: it is calculated whether the difference between the brightness of pixel O and that of pixel P1 is greater than or equal to the brightness threshold. If so, the pixel brightness of the first target position satisfies the light spot center constraint condition; otherwise it does not.
  • FIG. 2B shows another schematic diagram of the light spot recognition principle of an exemplary embodiment of the present disclosure.
  • As shown in FIG. 2B, the local detection area 200 is a rectangular local detection area and the pixel at the first target position is denoted O.
  • The number of second target positions is four: an upper pixel P1 above the pixel O of the first target position, a lower pixel P2 below it, a left pixel P3 to its left, and a right pixel P4 to its right.
  • The upper pixel P1, lower pixel P2, left pixel P3, and right pixel P4 can lie on pixels crossed by the edge line of the local detection area 200, or in the non-local detection area outside the local detection area 200 (not shown), either adjacent or not adjacent to the pixels on its contour line (not shown).
  • The average of the pixel brightnesses of the upper pixel P1, lower pixel P2, left pixel P3, and right pixel P4 can be calculated.
  • This average is taken as the target brightness, and the difference between the brightness of pixel O and the target brightness is compared against the brightness threshold. If the difference is greater than or equal to the brightness threshold, the pixel brightness of the first target position satisfies the light spot center constraint condition; otherwise it does not.
  • When the principle shown in FIG. 2B is used for light spot recognition, even if one of the upper pixel P1, lower pixel P2, left pixel P3, and right pixel P4 is abnormally bright, averaging over pixels in multiple directions suppresses its influence, so that whether the local detection area 200 contains a real light spot can be determined accurately. A sketch of this check follows.
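  • The following sketch implements the FIG. 2B-style check as described, under the assumptions of a grayscale numpy image and in-bounds sampling positions; the function name and the offset parameter are hypothetical:

```python
def satisfies_spot_center_constraint(image, first_target, offset, threshold):
    """FIG. 2B-style check: the brightness of the first target position O
    must exceed the average brightness of the pixels sampled above, below,
    left, and right of O by at least the brightness threshold."""
    x, y = first_target
    neighbors = [(x, y - offset), (x, y + offset),
                 (x - offset, y), (x + offset, y)]
    # Averaging the four directions suppresses a single abnormally bright
    # neighbor, as described for FIG. 2B.
    target_brightness = sum(float(image[ny, nx]) for nx, ny in neighbors) / 4
    return float(image[y, x]) - target_brightness >= threshold
```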
  • The exemplary embodiments of the present disclosure may apply one or more light spot detection algorithms to the local detection area to obtain the pixel brightness of the first target position.
  • The light spot detection algorithm may be any method such as pixel brightness retrieval, whole-image gradient, or image heat map.
  • Light spot detection may also be performed using the grayscale centroid method, Hough circle detection, circle fitting, etc., but is not limited thereto.
  • The measured brightness is often expressed as grayscale values; other representations may also be used, provided they reflect brightness.
  • A single light spot detection algorithm may be used to obtain the pixel brightness of the first target position in the local detection area. In that case, the target brightness is subtracted from the pixel brightness of the first target position, and the result is compared with the brightness threshold to perform light spot recognition.
  • The pixel brightness of the first target position in the exemplary embodiment of the present disclosure may also be a weighted value over the pixel brightnesses of multiple first target positions.
  • Multiple light spot detection algorithms may be applied to the local detection area to obtain the pixel brightness of multiple first target positions, and these brightnesses may then be weighted to obtain a brightness weighted value for the first target position (see the sketch below).
  • The target brightness is then subtracted from this brightness weighted value, and the result is compared with the brightness threshold to perform light spot recognition.
  • A weight prediction model can be trained on historical data to predict the weight of each spot detection algorithm, or the weights can be set from practical experience, reducing the error of the final value so that whether the tracked spot is a real spot can be determined more accurately.
  • There can be one or more weight prediction models, and one weight prediction model can predict the weights of one or more spot detection algorithms.
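  • A minimal sketch of the weighted combination; whether the disclosure normalizes the weights is not stated, so the normalization here is an assumption:

```python
def fused_first_target_brightness(brightnesses, weights):
    """Weighted value of the first-target-position brightnesses reported by
    several spot detection algorithms. The weights may come from a trained
    weight prediction model or be set from practical experience."""
    total = sum(weights)
    return sum(b * w for b, w in zip(brightnesses, weights)) / total
```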
  • In a concrete example, a camera with a resolution of 400×400 and a frame rate of 30 frames/s is used to collect an eye image sequence requiring spot detection.
  • Spot detection is performed on the first frame of the eye image sequence (that is, the target frame eye image) to obtain the center coordinates of each spot.
  • These center coordinates can be regarded as the starting coordinates of each spot in the eye image sequence, and they are saved as a spot sequence array.
  • FIG. 3A shows a first schematic diagram of a spot tracking process of an eye video sequence according to an exemplary embodiment of the present disclosure.
  • As shown in FIG. 3A, there are a first light spot 301, a second light spot 302, and a third light spot 303 on the eyeball surface in the first frame eye image of the eye video sequence 300.
  • The spot center coordinates of the first light spot 301 are determined to be (x11, y11), those of the second light spot 302 are (x12, y12), and those of the third light spot 303 are (x13, y13), where x11, x12, and x13 are the horizontal coordinates of the first, second, and third light spots of the first frame eye image, and y11, y12, and y13 are the corresponding vertical coordinates.
  • The spot centers can then be tracked in the second frame eye image in the order in which their center coordinates were detected in the first frame eye image.
  • Statistics show that the area of a light spot formed on the eyeball surface is generally within 20 pixels, and that at a camera frame rate of 30 frames/s the inter-frame spot movement distance is within 3 pixels. Therefore, the three spot center coordinates detected in the first frame eye image can be used as the geometric centers of the corresponding local detection areas, each chosen as a rectangular area of 6×6 pixels. The pixel coordinates of the first target position in each local detection area of the second frame eye image are then determined with the light spot detection algorithm. It should be understood that spot tracking for each frame after the second can likewise follow the detection process of the second frame eye image.
  • The second frame eye image is used as an example for description below.
  • FIG. 3B shows a second schematic diagram of the spot tracking process of the eye video sequence of an exemplary embodiment of the present disclosure.
  • As shown in FIG. 3B, based on the spot center coordinates of the first light spot 301, the corresponding first local detection area G1 is located in the second frame eye image so that its geometric center coordinates are (x11, y11); based on the spot center coordinates of the second light spot 302, the second local detection area G2 is located so that its geometric center coordinates are (x12, y12); and based on the spot center coordinates of the third light spot 303, the third local detection area G3 is located so that its geometric center coordinates are (x13, y13).
  • The spot detection algorithm is applied to the first local detection area G1 to obtain the pixel coordinates (x21, y21) of the first target position in G1; to the second local detection area G2 to obtain the pixel coordinates (x22, y22) of the first target position in G2; and to the third local detection area G3 to obtain the pixel coordinates (x23, y23) of the first target position in G3.
  • The exemplary embodiment of the present disclosure can use the grayscale centroid method to detect the grayscale centroid coordinates of a local detection area and define the grayscale centroid as the first target position.
  • The grayscale centroid is calculated from the light intensity distribution of the local detection area, improving the accuracy of locating the position with the highest brightness in the area and thereby further ensuring the accuracy of subsequent light spot recognition.
  • The grayscale centroid coordinate detection process for a local detection area in the exemplary embodiment of the present disclosure is as follows.
  • The first step is to calculate G_all, the sum of the grayscale values of all pixels in the local detection area.
  • FIG. 4 shows a schematic diagram of the coordinate system of the local detection area of the exemplary embodiment of the present disclosure.
  • As shown in FIG. 4, the local detection area coordinate system is established with the lower-left corner pixel C of the local detection area as the origin, with origin coordinates (0, 0).
  • The coordinates of the upper-right corner pixel G of the local detection area are (m, n); the local detection area therefore comprises n rows and m columns of pixels.
  • The sum of the grayscale values of all pixels in the local detection area is G_all = Σ_{i=1..n} Σ_{j=1..m} I(i, j), where I(i, j) is the brightness of the pixel in the i-th row and j-th column of the local detection area.
  • The second step is to calculate G_x, the sum over all pixels of the product of each pixel's brightness and its X coordinate in the local detection area coordinate system, and G_y, the sum of the product of each pixel's brightness and its Y coordinate: G_x = Σ_{i=1..n} Σ_{j=1..m} I(i, j)·j and G_y = Σ_{i=1..n} Σ_{j=1..m} I(i, j)·i.
  • The third step is to calculate the grayscale centroid coordinates of the local detection area: (x2, y2) = (G_x / G_all, G_y / G_all).
  • The pixel coordinates of the first target position are determined from the grayscale centroid coordinates (x2, y2) of the local detection area, and the grayscale value of the first target position, that is, the centroid grayscale value of the local detection area, is then read off at those pixel coordinates.
  • The grayscale average of the four second target positions is calculated to obtain the target brightness.
  • The target brightness is subtracted from the grayscale value of the first target position, and the result is compared with the brightness threshold to determine whether the light spot center constraint condition is met.
  • Continuing the example, with the grayscale centroid at (x2, y2), the pixel coordinates of the four second target positions are (x2-3, y2), (x2+3, y2), (x2, y2-3), and (x2, y2+3). A sketch of the centroid computation follows.
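  • A sketch of the three-step grayscale centroid computation on a cropped local detection area (numpy assumed; note that array row 0 is the top row, whereas FIG. 4 places the origin at the lower-left corner, which only flips the Y axis and does not change the centroid math):

```python
import numpy as np

def grayscale_centroid(area):
    """Grayscale centroid (x2, y2) of a local detection area in the area's
    own coordinate system: the brightness-weighted mean pixel coordinate."""
    area = area.astype(np.float64)
    g_all = area.sum()                  # step 1: sum of all grayscale values
    if g_all == 0:
        return None                     # fully dark area - no centroid
    ys, xs = np.indices(area.shape)
    g_x = (area * xs).sum()             # step 2: brightness-weighted X sum
    g_y = (area * ys).sum()             #         brightness-weighted Y sum
    return g_x / g_all, g_y / g_all     # step 3: (x2, y2)
```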
  • If the difference result is greater than or equal to the brightness threshold, it can be considered that the theoretical area of the light spot of the first frame eye image on the second frame eye image (which can be called the theoretical light spot) has an intersection with the local detection area, that is, they at least partially overlap; for example, the local detection area contains the theoretical light spot, or partially overlaps with it.
  • The local detection areas obtained in the frame eye image to be detected correspond one-to-one to the light spots detected in the target frame eye image.
  • The theoretical light spot that overlaps a local detection area corresponds to a light spot in the target frame eye image.
  • In this case, the pixel coordinates of the first target position in the local detection area can be used to update the pixel coordinates of the corresponding light spot in the light spot sequence array. For example, for the kth light spot of the target frame eye image, the pixel coordinates at the kth position of the array can be updated to the pixel coordinates of the first target position in the corresponding local detection area. If the difference result is less than the brightness threshold, it can be considered that the local detection area has no intersection with the theoretical light spot and does not contain a real light spot.
  • In that case, the position corresponding to the local detection area in the light spot sequence array is occupied by an empty array, so that the disappeared light spot is excluded from light spot detection in subsequent frame eye images.
  • The theoretical light spot above can be understood as the area where the light spot of the first frame eye image would be tracked on the second frame eye image according to the characteristics of inter-frame light spot movement.
  • For example, the spot sequence array Array of the first frame eye image is [(x11, y11), (x12, y12), (x13, y13)].
  • If the first light spot is tracked in the second frame eye image, (x11, y11) in the spot sequence array Array can be updated to (x21, y21).
  • If the second light spot is not tracked, an empty array can be used to occupy the position of (x12, y12) in the spot sequence array Array.
  • The exemplary embodiment of the present disclosure can thus determine from the latest light spot sequence array whether a light spot of the first frame eye image has disappeared in the second frame eye image, the serial numbers of the disappeared spots, and the pixel coordinates of the tracked spots. On this basis, when tracking a specified light spot, the latest array can be queried at the position of that spot to determine whether it is tracked in the latest eye image. If the motion trajectory of a light spot across the eye video sequence is needed, it can be obtained by recording the historical states of the light spot sequence array. An end-to-end sketch of this tracking loop follows.
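  • Tying the sketches above together, the following loop illustrates the whole tracking flow under the worked example's assumptions (about 3 px inter-frame movement; here half_size=3 yields a 7×7 window, slightly larger than the 6×6 area in the text, because the sketch uses an odd-sized window around a center pixel; the brightness threshold is an arbitrary placeholder):

```python
def track_spots(frames, initial_centers, half_size=3, offset=3, threshold=30):
    """Return the history of the spot sequence array over a sequence of
    grayscale frames following the target frame. Per-spot motion
    trajectories can be read off the history; None marks a lost spot."""
    history = [list(initial_centers)]
    for frame in frames:
        updated = []
        for center in history[-1]:
            if center is None:          # spot already lost: keep placeholder
                updated.append(None)
                continue
            area, (x0, y0) = get_local_detection_area(frame, center, half_size)
            c = grayscale_centroid(area)
            if c is None:
                updated.append(None)
                continue
            # Convert the centroid back to whole-image pixel coordinates.
            x, y = int(round(c[0])) + x0, int(round(c[1])) + y0
            ok = satisfies_spot_center_constraint(frame, (x, y), offset,
                                                  threshold)
            updated.append((x, y) if ok else None)
        history.append(updated)
    return history
```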
  • One or more technical solutions provided in the embodiments of the present disclosure take into account the short distance of light spot movement between frames.
  • A local detection area is obtained from the frame eye image to be detected based on the light spot, thereby narrowing the light spot detection range of the frame eye image to be detected.
  • The position with the highest brightness in the local detection area can be set as the first target position, and the light spot recognition result of the frame eye image to be detected can then be determined based on the pixel brightness judgment result for the first target position, thereby determining whether the light spot is tracked in the frame eye image to be detected and improving the accuracy of light spot detection.
  • When performing light spot tracking, the method of the exemplary embodiment of the present disclosure combines the inter-frame movement characteristics of the light spot and performs local light spot detection on the frame eye image to be detected while ensuring detection accuracy, thereby avoiding a global search over the image.
  • Compared with the global search method, the method of the exemplary embodiment of the present disclosure can reduce the amount of computation to roughly 10^-3 of the original. It can be seen that the method can reduce computation and device power consumption while ensuring the accuracy of light spot detection, thereby improving the endurance of mobile devices.
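  • As a rough consistency check of that figure using the worked example's numbers (this arithmetic is not stated explicitly in the disclosure): three 6×6 local detection areas cover 3 × 6 × 6 = 108 pixels per frame, versus 400 × 400 = 160,000 pixels for a global search, a ratio of about 6.8 × 10^-4, i.e. on the order of 10^-3.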
  • To implement the above functions, the electronic device includes hardware structures and/or software modules corresponding to the execution of each function.
  • The present disclosure can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled persons can use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present disclosure.
  • Each functional module can be divided according to each function, or two or more functions can be integrated into one processing module.
  • The above integrated module can be implemented in the form of hardware or of software functional modules.
  • The division of modules in the embodiments of the present disclosure is schematic and is only a logical function division; there may be other division methods in actual implementation.
  • The exemplary embodiment of the present disclosure provides a light spot tracking apparatus, which can be an electronic device or a chip applied to an electronic device.
  • FIG. 5 shows a functional schematic block diagram of a light spot tracking apparatus according to an exemplary embodiment of the present disclosure. As shown in FIG. 5, the light spot tracking apparatus 500 includes:
  • an extraction module 501, configured to extract a target frame eye image and a frame eye image to be detected from an eye image sequence, wherein the target frame eye image contains a light spot, and the acquisition time of the target frame eye image is earlier than the acquisition time of the frame eye image to be detected;
  • an acquisition module 502, configured to acquire a local detection area from the frame eye image to be detected based on the light spot; and
  • a recognition module 503, configured to determine the light spot recognition result of the frame eye image to be detected in response to the pixel brightness judgment result for the first target position located in the local detection area, the first target position being the position with the highest brightness in the local detection area.
  • In some embodiments, the geometric center of the local detection area coincides with the center of the light spot, where the center of the light spot is the position with the highest brightness in the light spot, or the center of the light spot is the geometric center of the light spot; and the size of the local detection area is larger than the size of the light spot.
  • The acquisition module 502 is configured to determine the size parameter of the local detection area based on the acquisition frame rate of the image acquisition device, and to determine the local detection area based on the spot center of the target frame eye image and the size parameter of the local detection area.
  • The size of the local detection area is positively correlated with the inter-frame light spot movement distance.
  • The recognition module 503 is further configured to detect the local detection area using a grayscale centroid algorithm to obtain the pixel brightness of the first target position.
  • Alternatively, the pixel brightness of the first target position is a weighted value of the brightness of multiple first target positions,
  • and the recognition module 503 is further configured to detect the local detection area using multiple light spot detection algorithms to obtain the brightness of multiple first target positions, and to weight these brightnesses to obtain the brightness weighted value of the first target position.
  • The recognition module 503 is configured to: if the pixel brightness of the first target position located in the local detection area satisfies the light spot center constraint condition, determine that the light spot recognition result includes: the local detection area has an intersection with the light spot to be detected, the light spot to be detected being the light spot of the target frame eye image tracked in the frame eye image to be detected; and if the pixel brightness of the first target position does not satisfy the light spot center constraint condition, determine that the light spot recognition result includes: the light spot of the target frame eye image has disappeared in the frame eye image to be detected.
  • Where the target frame eye image has a plurality of light spots,
  • the recognition module 503 is further configured to store the center coordinates of the plurality of light spots of the target frame eye image in a light spot sequence array; if the first target position in a local detection area satisfies the light spot center constraint, use the coordinates of the first target position to update the light spot center coordinates corresponding to that local detection area in the array; and if it does not, use an empty array to occupy the corresponding position in the array.
  • The light spot center constraint condition includes: the difference between the pixel brightness of the first target position and the target brightness is greater than or equal to a brightness threshold, where the target brightness is determined from the pixel brightness of at least one second target position located at the edge of the local detection area.
  • The distance between the second target position and the first target position satisfies d ≤ d' ≤ dmax, where d is the distance between the first target position and the contour of the local detection area, d' is the distance between the first target position and the second target position, and dmax is the maximum distance between the first target position and the second target position.
  • FIG. 6 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure.
  • As shown in FIG. 6, the chip 600 includes one or more (including two) processors 601 and a communication interface 602.
  • The communication interface 602 can support the electronic device in performing the data transceiving steps of the above method, and the processor 601 can support it in performing the data processing steps of the above method.
  • The chip 600 further includes a memory 603, which may include a read-only memory and a random access memory and provides operation instructions and data to the processor.
  • A portion of the memory may also include a non-volatile random access memory (NVRAM).
  • The processor 601 performs corresponding operations by calling operation instructions stored in the memory (these operation instructions may be stored in the operating system).
  • The processor 601 controls the processing operations of any one of the terminal devices, and may also be referred to as a central processing unit (CPU).
  • The memory 603 may include a read-only memory and a random access memory, and provides instructions and data to the processor 601.
  • A portion of the memory 603 may also include NVRAM.
  • The processor, the communication interface, and the memory are coupled together through a bus system, where the bus system may include a power bus, a control bus, and a status signal bus in addition to a data bus.
  • For clarity, the various buses are labeled as bus system 604 in FIG. 6.
  • The method disclosed in the above embodiments of the present disclosure can be applied to a processor or implemented by a processor.
  • The processor may be an integrated circuit chip with signal processing capabilities.
  • Each step of the above method can be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software.
  • The above processor can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • The methods, steps, and logic diagrams disclosed in the embodiments of the present disclosure can thereby be implemented or executed.
  • The general-purpose processor can be a microprocessor, or the processor can be any conventional processor, etc.
  • The steps of the method disclosed in the embodiments of the present disclosure can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor.
  • The software module can be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register.
  • The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • The exemplary embodiment of the present disclosure also provides an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor.
  • The memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, causes the electronic device to perform the method according to the embodiments of the present disclosure.
  • The exemplary embodiments of the present disclosure also provide a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, causes the computer to execute the method according to the embodiments of the present disclosure.
  • The exemplary embodiments of the present disclosure further provide a computer program product, including a computer program, wherein the computer program, when executed by a processor of a computer, causes the computer to perform the method according to the embodiments of the present disclosure.
  • Referring to FIG. 7, a block diagram of an electronic device 700 that can serve as a server or client of the present disclosure will now be described; it is an example of a hardware device that can be applied to various aspects of the present disclosure.
  • The electronic device is intended to represent various forms of digital electronic computer equipment, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • The electronic device can also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices.
  • The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
  • As shown in FIG. 7, the electronic device 700 includes a computing unit 701, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 702 or loaded from a storage unit 708 into a random access memory (RAM) 703.
  • The RAM 703 can also store various programs and data required for the operation of the electronic device 700.
  • The computing unit 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704.
  • An input/output (I/O) interface 705 is also connected to the bus 704.
  • A plurality of components in the electronic device 700 are connected to the I/O interface 705, including an input unit 706, an output unit 707, the storage unit 708, and a communication unit 709.
  • The input unit 706 may be any type of device capable of inputting information to the electronic device 700; it may receive input digital or character information and generate key signal inputs related to user settings and/or function control of the electronic device.
  • The output unit 707 may be any type of device capable of presenting information, and may include but is not limited to a display, a speaker, a video/audio output terminal, a vibrator, and/or a printer.
  • The storage unit 708 may include, but is not limited to, a magnetic disk and an optical disk.
  • The communication unit 709 allows the electronic device 700 to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver and/or a chipset, such as a Bluetooth™ device, a WiFi device, a WiMax device, or a cellular communication device.
  • The computing unit 701 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, digital signal processors (DSPs), and any appropriate processors, controllers, microcontrollers, etc.
  • The computing unit 701 performs the various methods and processes described above.
  • The method of the exemplary embodiments of the present disclosure may be implemented as a computer software program tangibly contained in a machine-readable medium, such as the storage unit 708.
  • Part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709.
  • Alternatively, the computing unit 701 may be configured to execute the method in any other appropriate manner (e.g., by means of firmware).
  • The program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing device, so that when executed by the processor or controller, the program code implements the functions/operations specified in the flow charts and/or block diagrams.
  • The program code may be executed entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, device, or equipment.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or equipment, or any suitable combination of the foregoing.
  • a more specific example of a machine-readable storage medium may include an electrical connection based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., a disk, optical disk, memory, or programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal.
  • the term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • to provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display device for displaying information to the user (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor); and a keyboard and a pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
  • other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).
  • the systems and techniques described herein may be implemented in a computing system that includes a backend component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a frontend component (e.g., a user computer with a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described herein), or a computing system that includes any combination of such backend components, middleware components, or frontend components.
  • the components of the system may be interconnected by digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include: a local area network (LAN), a wide area network (WAN), and the Internet.
  • a computer system may include clients and servers. Clients and servers are generally remote from each other and typically interact through a communication network. The relationship of client and server arises through computer programs that run on respective computers and have a client-server relationship to each other.
  • the computer program product includes one or more computer programs or instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a terminal, a display terminal or other programmable device.
  • the computer program or instruction may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer program or instruction may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired or wireless means.
  • the computer-readable storage medium may be any available medium that a computer can access or a data storage device such as a server or data center that integrates one or more available media.
  • the available medium may be a magnetic medium, such as a floppy disk, a hard disk, or a tape; it may also be an optical medium, such as a digital video disc (DVD); it may also be a semiconductor medium, such as a solid state drive (SSD).

Landscapes

  • Image Analysis (AREA)

Abstract

The present disclosure provides a light spot tracking method and apparatus, an electronic device, and a storage medium. The method includes: extracting a target frame eye image and a to-be-detected frame eye image from an eye image sequence, where the target frame eye image contains a light spot and the acquisition time of the target frame eye image is earlier than that of the to-be-detected frame eye image; acquiring a local detection area from the to-be-detected frame eye image based on the light spot; and determining a light spot recognition result of the to-be-detected frame eye image in response to a pixel brightness judgment result of a first target position located in the local detection area, the first target position being the position with the highest brightness in the local detection area.

Description

Light spot tracking method and apparatus, electronic device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATION
This application is based on, and claims priority to, Chinese patent application No. 202211608969.8 filed on December 14, 2022, the disclosure of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The present disclosure relates to the field of image processing technology, and in particular to a light spot tracking method and apparatus, an electronic device, and a storage medium.
BACKGROUND
With the development of virtual reality technology, an accurate interpupillary distance has become an important parameter for virtual reality (VR) devices. A suitable interpupillary distance not only lets the user see a clear image, but also enables an immersive sense of stereoscopic visual depth.
When measuring the interpupillary distance, the position of the light spot on the iris must be detected accurately, and the spot position is then used to determine the interpupillary distance parameter, so that the VR device can adjust the spacing of its two lenses based on that parameter and guarantee a good visual experience for the user. However, spot detection requires acquiring a sequence of eye images of the user and performing a global search over the sequence, which entails a large amount of computation and high power consumption.
SUMMARY
According to one aspect of the present disclosure, a light spot tracking method is provided, the method including:
extracting a target frame eye image and a to-be-detected frame eye image from an eye image sequence, where the target frame eye image contains a light spot, and the acquisition time of the target frame eye image is earlier than that of the to-be-detected frame eye image;
acquiring a local detection area from the to-be-detected frame eye image based on the light spot; and
determining a light spot recognition result of the to-be-detected frame eye image in response to a pixel brightness judgment result of a first target position located in the local detection area, the first target position being the position with the highest brightness in the local detection area.
According to another aspect of the present disclosure, a light spot tracking apparatus is provided, the apparatus including:
an extraction module configured to extract a target frame eye image and a to-be-detected frame eye image from an eye image sequence, where the target frame eye image contains a light spot, and the acquisition time of the target frame eye image is earlier than that of the to-be-detected frame eye image;
an acquisition module configured to acquire a local detection area from the to-be-detected frame eye image based on the light spot; and
a recognition module configured to determine a light spot recognition result of the to-be-detected frame eye image in response to a pixel brightness judgment result of a first target position located in the local detection area, the first target position being the position with the highest brightness in the local detection area.
According to another aspect of the present disclosure, an electronic device is provided, including:
at least one processor; and
a memory storing a program;
where the program includes instructions that, when executed by the processor, cause the processor to perform the method according to the exemplary embodiments of the present disclosure.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium is provided, the non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method according to the exemplary embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
Further details, features, and advantages of the present disclosure are disclosed in the following description of exemplary embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows a flowchart of a light spot tracking method according to an exemplary embodiment of the present disclosure;
FIG. 2A shows a schematic diagram of a light spot recognition and judgment principle according to an exemplary embodiment of the present disclosure;
FIG. 2B shows another schematic diagram of a light spot recognition principle according to an exemplary embodiment of the present disclosure;
FIG. 3A shows a first schematic diagram of a light spot tracking process for an eye video sequence according to an exemplary embodiment of the present disclosure;
FIG. 3B shows a second schematic diagram of a light spot tracking process for an eye video sequence according to an exemplary embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of the coordinate system of a local detection area according to an exemplary embodiment of the present disclosure;
FIG. 5 shows a schematic functional block diagram of a light spot tracking apparatus according to an exemplary embodiment of the present disclosure;
FIG. 6 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure;
FIG. 7 shows a structural block diagram of an exemplary electronic device capable of implementing embodiments of the present disclosure.
DETAILED DESCRIPTION
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit its scope of protection.
It should be understood that the steps recited in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. Moreover, the method embodiments may include additional steps and/or omit performing the steps shown. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, meaning "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below. It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not intended to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules, or units.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
For a display device such as virtual reality (VR) glasses, the device needs to adjust the interpupillary distance for different users. Only a suitable interpupillary distance yields suitable object and image distances, and only suitable object and image distances let the user see the imagery on the display screen clearly. Therefore, to obtain an accurate interpupillary distance, the position of the light spot on the iris must be detected and located precisely; the spot position is crucial for determining the interpupillary distance parameter.
In the related art, the spot position can be detected over the entire eye image by any means such as pixel brightness retrieval, whole-image gradients, or image heat maps. Spot detection then requires a global search over the eye image, which imposes a heavy computational load and high power consumption on the device. For mobile devices in particular, high power consumption means fast battery drain, poor endurance, and a tendency to overheat.
In view of the above problems, the exemplary embodiments of the present disclosure provide a light spot tracking method that exploits the inter-frame movement characteristics of light spots: while preserving spot detection accuracy, it performs local spot detection on the to-be-detected frame eye image, thereby avoiding a global spot search over the image, reducing the computational load and device power consumption, and improving the endurance of mobile devices.
The one or more technical solutions provided in the embodiments of the present disclosure consider that the inter-frame spot movement distance is short: a local detection area is acquired from the to-be-detected frame eye image based on the light spot contained in the target frame eye image, which narrows the spot detection range of the to-be-detected frame eye image. On this basis, the position with the highest brightness in the local detection area can be set as the first target position, and the spot recognition result of the to-be-detected frame eye image is then determined from the pixel brightness judgment result of the first target position, which determines whether the spot is tracked in the to-be-detected frame eye image. Thus, when tracking spots, the method of the exemplary embodiments combines the inter-frame movement characteristics of spots and performs local spot detection on the to-be-detected frame eye image while preserving detection accuracy, avoiding a global search over the image. The method can therefore reduce the computational load and device power consumption and improve the endurance of mobile devices while preserving spot detection accuracy.
The method of the exemplary embodiments of the present disclosure may be executed by an electronic device or a chip of an electronic device, the electronic device having a camera, a video camera, or an image acquisition function. The electronic device may be a smartphone (such as an Android or iOS phone), a wearable device, an AR (augmented reality)/VR (virtual reality) device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a tablet, a handheld computer, a mobile internet device (MID), or another device using flash memory. The electronic device is fitted with a camera, which may be a monocular camera, a binocular camera, or the like, and the images it acquires may be, but are not limited to, grayscale images, color images, or thermal images.
FIG. 1 shows a flowchart of a light spot tracking method according to an exemplary embodiment of the present disclosure. As shown in FIG. 1, the light spot tracking method of the exemplary embodiments may include:
Step 101: extract a target frame eye image and a to-be-detected frame eye image from an eye image sequence, where the target frame eye image contains a light spot and its acquisition time is earlier than that of the to-be-detected frame eye image. The number of light spots here may be one or more.
The target frame eye image of the exemplary embodiments may be the first frame of the eye image sequence or any other frame. The target frame eye image and the to-be-detected frame eye image may be adjacent frames or interval frames.
When they are adjacent frames, the target frame eye image may be the k-th frame and the adjacent frame the (k+1)-th frame, k being an integer greater than or equal to 1. When they are interval frames, the target frame eye image may be the k-th frame and the to-be-detected frame eye image the (k+r)-th frame, k being an integer greater than or equal to 1 and r an integer greater than or equal to 2, for example r = 2 to 4. This ensures that the target frame image and the to-be-detected frame image are two frames acquired while the eyeball moves in a single direction, which makes it convenient to locate the local detection area in the to-be-detected frame eye image according to the inter-frame movement characteristics, keeps the determined local detection area accurate, and improves the spot recognition success rate.
In practical applications, the eye image sequence of the exemplary embodiments may be an image sequence containing multiple frames of eye images, for example a video file containing the eyes of a target subject acquired by an image acquisition device. The eye images of the exemplary embodiments may be human or animal, and the sequence may contain only the eyes, facial images containing the eyes, full-body images containing the eyes, and so on.
Illustratively, the target frame eye image may be converted to grayscale and then subjected to spot detection; the detection may use any means such as pixel brightness retrieval, whole-image gradients, or image heat maps, which the present disclosure does not limit.
Step 102: acquire a local detection area from the to-be-detected frame eye image based on the light spot. When multiple spots are detected in the target frame eye image, multiple local detection areas may be acquired so that each spot obtains a corresponding local detection area in the to-be-detected frame eye image. The spots of the target frame eye image thus correspond one-to-one with the local detection areas of the to-be-detected frame eye image; that is, the number of spots in the target frame eye image equals the number of local detection areas in the to-be-detected frame eye image.
In some exemplary embodiments of the present disclosure, the geometric center of the local detection area may coincide with the center of the corresponding spot of the target frame eye image, where the spot center may be the geometric center of the spot or the position with the highest brightness within the spot.
When the geometric center of the local detection area coincides with the geometric center of the spot, the geometric center of the local detection area can be determined on the to-be-detected frame eye image from the coordinates of the spot's geometric center, so that the two centers coincide, and the size of the local detection area is then determined from the size of the spot so that the former exceeds the latter. In other words, after a spot is detected in the target frame eye image, the area covered by the spot can be enlarged, centered on the spot's geometric center, according to the spot's inter-frame movement characteristics, yielding the size of the local detection area. It should be noted that "the size of the local detection area is larger than the size of the spot" may mean: the region bounded by the spot's edge lies within the region bounded by the corresponding local detection area's edge, and the former's area is smaller than the latter's. In some examples, both the spot and the local detection area may be circular, in which case the radial size of the local detection area may be 1 to 3 pixels larger than that of the spot.
When the geometric center of the local detection area coincides with the brightest position in the spot, illustratively, the brightest position within the spot of the target frame eye image (defined as the brightness center) can be determined, the geometric center of the corresponding local detection area is then fixed on the to-be-detected frame eye image from the coordinates of the brightness center so that the two coincide, and the size of the local detection area is set according to the spot's inter-frame movement characteristics, thereby determining the region the local detection area occupies in the to-be-detected frame eye image.
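As a concrete illustration of this cropping step, the following Python sketch extracts a square local detection area centered on a spot center; the function name, the `half_size` parameter, and the clipping behavior at image borders are illustrative assumptions and not part of the disclosed method.

```python
def local_detection_region(image, spot_center, half_size):
    """Crop a square local detection area centered on a spot center.

    image: 2D grayscale array of shape (H, W); spot_center: integer
    (x, y) pixel coordinates taken from the target frame; half_size:
    number of pixels from the geometric center to the area's edge.
    Returns the cropped area and the (x0, y0) offset of its corner.
    """
    h, w = image.shape
    x, y = spot_center
    # Clip the window so it stays inside the frame near image borders.
    x0, x1 = max(0, x - half_size), min(w, x + half_size + 1)
    y0, y1 = max(0, y - half_size), min(h, y + half_size + 1)
    return image[y0:y1, x0:x1], (x0, y0)

# Example: an area extending 3 pixels from a spot center at (120, 88):
# region, origin = local_detection_region(frame, (120, 88), 3)
```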
Step 103: determine the spot recognition result of the to-be-detected frame eye image in response to the pixel brightness judgment result of the first target position located in the local detection area, the first target position being the position with the highest brightness in the local detection area.
In practical applications, the exemplary embodiments may judge whether the pixel brightness of the first target position located in the local detection area satisfies a spot center constraint condition, thereby obtaining the pixel brightness judgment result. If it does, the spot recognition result may be determined to include: the local detection area intersects the to-be-detected spot, where the to-be-detected spot is the spot of the target frame eye image as tracked in the to-be-detected frame eye image. If it does not, the local detection area contains no real spot, and the spot recognition result may be determined to include: the spot of the target frame eye image has vanished in the to-be-detected frame eye image. It should be noted that the first target position may correspond to one pixel or to multiple pixels of the local detection area, depending on the method actually used to determine it. For example, in some embodiments of the present disclosure the gray centroid of the local detection area may be taken as the first target position, in which case the first target position corresponds to the single pixel at the gray centroid coordinates of the local detection area.
When the target frame eye image contains multiple spots, the local detection areas here correspond to those spots: multiple corresponding local detection areas can be acquired from the to-be-detected frame eye image based on the multiple spots of the target frame eye image, each local detection area representing the possible position of the corresponding spot in the to-be-detected frame eye image.
When the target frame eye image contains multiple spots, the method of the exemplary embodiments further includes: saving the multiple spot center coordinates of the target frame eye image in a spot sequence array; if the first target position located in a local detection area satisfies the spot center constraint condition, updating the spot center coordinates corresponding to that local detection area in the spot sequence array with the coordinates of the first target position; and if it does not, placing an empty array at the position corresponding to that local detection area in the spot sequence array, thereby updating the array.
When the spot sequence array of the target frame eye image is updated in this way, the spot order of the to-be-detected frame eye image's spot sequence array is guaranteed to match that of the target frame eye image's array. That is, once the spot center coordinates at each position of the target frame's array are determined, if the local detection area obtained from the to-be-detected frame eye image based on those coordinates contains a real spot, those coordinates in the array can be replaced with the coordinates of that area's first target position; if it does not contain a real spot, those coordinates can be replaced with an empty array.
Thus, when the spot orders of the two arrays match, the to-be-detected frame's spot sequence array essentially reflects the spot tracking result of the target frame eye image. Whether each position in the to-be-detected frame's array holds an empty array therefore tells whether the corresponding spot was tracked in that frame, achieving the goal of tracking a designated spot. For example, if the array entry at some position of the to-be-detected frame's spot sequence array is an empty array, the spot at the same position of the target frame's array was not tracked in the to-be-detected frame eye image.
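The bookkeeping just described can be sketched as follows; the Python list representation, the `matches` input, and the `(0, 0)` placeholder (matching the worked example later in this description) are assumptions made for illustration.

```python
def update_spot_array(spot_array, matches):
    """Update the spot sequence array in place, preserving spot order.

    spot_array: list of (x, y) spot centers from the target frame.
    matches: list aligned with spot_array; each entry is the first
    target position found in the corresponding local detection area,
    or None when the spot center constraint failed (spot vanished).
    """
    EMPTY = (0, 0)  # empty-array placeholder for a vanished spot
    for i, match in enumerate(matches):
        spot_array[i] = match if match is not None else EMPTY
    return spot_array

# Example mirroring FIGS. 3A/3B, where the second spot is lost:
# update_spot_array([(9, 4), (5, 7), (2, 3)], [(10, 4), None, (3, 3)])
# -> [(10, 4), (0, 0), (3, 3)]
```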
In an optional manner, the eye image sequence is acquired by an image acquisition device. Acquiring the local detection area from the to-be-detected frame eye image based on the spot may then include: determining a size parameter of the local detection area based on the acquisition frame rate of the image acquisition device, and determining the local detection area based on the spot center and the size parameter of the local detection area. It should be understood that the unit pixel size is determined by the photosensitive parameters of the image acquisition device.
When the size parameter of the local detection area is determined from the acquisition frame rate, the inter-frame spot movement distance can be determined statistically, and the size parameter is then determined from that distance and the unit pixel size of the eye images in the sequence. It should be understood that the span from the target frame eye image to the to-be-detected frame eye image can be defined as "inter-frame", and the inter-frame spot movement distance may refer to the distance the spot moves from the start of the target frame eye image to the start of the to-be-detected frame eye image.
Illustratively, the higher the acquisition frame rate, the shorter the acquisition interval between adjacent frames and hence the smaller the inter-frame spot movement distance; conversely, the lower the frame rate, the longer the interval and the larger the distance. The size of the local detection area is therefore positively correlated with the inter-frame spot movement distance of the eye image sequence.
Illustratively, the size parameter of the local detection area may be determined not only from the inter-frame spot movement distance derived from the acquisition frame rate, but also by combining the geometric shape of the local detection area with the acquisition frame rate, yielding the size parameter, or equivalently the coverage on the to-be-detected frame eye image. For example, when the local detection area is approximately circular, the size parameter may include a radial dimension such as a radius or diameter; when it is rectangular, the size parameter includes a diagonal parameter, such as the diagonal length or the diagonal vertex coordinates.
Illustratively, when the image acquisition device acquires the eye image sequence, one frame of eye image may be captured every millisecond, so that the spot center moves only a short distance from the target frame eye image to the to-be-detected frame eye image. The spot center of the target frame eye image can be set as the geometric center of the local detection area, and, on this basis, the position and coverage of the local detection area can be located accurately from its geometric shape and the acquisition frame rate (or the inter-frame spot movement distance). The local detection area is then a local region of the to-be-detected frame eye image, so spot detection on that frame need not examine all of its pixels, which reduces the computational load of spot detection and the device power consumption.
Illustratively, in units of pixels, the number of pixels from the geometric center of the local detection area to its edge can be determined from the ratio of the inter-frame spot movement distance to the unit pixel size; if that ratio is fractional, it can be rounded up, making the local detection area slightly larger than the corresponding spot range in the target frame eye image and avoiding errors when subsequently confirming whether the local detection area contains a real spot.
When the target frame eye image and the to-be-detected frame eye image are adjacent frames, the size parameter of the local detection area is greater than or equal to N, with N = ⌈d0 / dpix⌉, where N denotes the minimum number of pixels from the geometric center of the local detection area to its edge, d0 denotes the inter-frame spot movement distance, dpix denotes the unit pixel size of the eye image sequence, and ⌈·⌉ denotes the round-up (ceiling) operator. The size of the local detection area is thus negatively correlated with the unit pixel size of each frame in the sequence and positively correlated with the inter-frame spot movement distance.
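As a numeric illustration of this bound (not part of the disclosure), the ceiling can be computed directly; the function name and the millimetre units in the example are assumptions chosen purely for illustration.

```python
import math

def min_half_size(d0, d_pix):
    """Minimum pixel count N from the area's geometric center to its
    edge, N = ceil(d0 / d_pix) as stated above. d0 is the inter-frame
    spot movement distance and d_pix the unit pixel size, both in the
    same physical unit (mm here, purely for illustration)."""
    return math.ceil(d0 / d_pix)

print(min_half_size(0.05, 0.02))  # ratio 2.5 rounds up to N = 3
```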
In an optional manner, the spot center constraint condition of the exemplary embodiments includes: the difference between the brightness of the first target position and a target brightness is greater than or equal to a brightness threshold, where the target brightness is determined from the brightness of at least one second target position located at the edge of the local detection area. That is, since the first target position is the position with the highest brightness in the local detection area, the brightness threshold serves as the criterion for whether the brightness of the local detection area is clearly higher than that of its surroundings. It should be understood that the brightness threshold can be set according to the actual situation.
Illustratively, if the difference between the pixel brightness of the first target position and the target brightness is greater than or equal to the brightness threshold, the local detection area is especially bright, markedly brighter than its surroundings, so the spot of the target frame eye image corresponding to this local detection area is tracked on the to-be-detected frame eye image. If the difference between the first target position and the target brightness is less than the brightness threshold, the brightness of the local detection area is comparatively low relative to its surroundings, it is not a real spot, and the corresponding spot of the target frame eye image has vanished on the to-be-detected frame eye image.
Illustratively, the distance between the second target position and the first target position may satisfy d ≤ d' ≤ dmax, where d denotes the distance between the first target position and the contour of the local detection area, d' denotes the distance between the first and second target positions, and dmax denotes the maximum distance between them. The second target position is thus a position on the contour line of the local detection area, or a position outside the local detection area that is relatively close to its contour line. It should be understood that when the maximum distance between the first and second target positions exceeds d, one can set dmax − d = Δd, with Δd designed according to the actual situation.
When there are multiple second target positions, the target brightness may be determined from their pixel brightnesses; for example, the target brightness may be the average of the pixel brightnesses of the multiple second target positions. One can then judge whether the brightness of the first target position exceeds the target brightness by the required margin, to decide whether the pixel brightness of the first target position satisfies the spot center constraint condition, thereby avoiding detection errors caused by locally over-bright pixels outside the local detection area.
In practical applications, the pixels of the second target positions in the to-be-detected frame eye image may lie in various directions from the pixel of the first target position, for example above, below, to the left, and to the right, or in other directions. The shape of the local detection area may be circular, elliptical, rectangular, irregular, and so on. The rectangular case is described below as an example.
When there is a single second target position, FIG. 2A shows a schematic diagram of a spot recognition and judgment principle of an exemplary embodiment of the present disclosure. As shown in FIG. 2A, the local detection area 200 is rectangular; the pixel of the first target position is denoted O, and the pixel of the second target position is denoted P1.
As shown in FIG. 2A, assume the pixel O of the first target position coincides with the pixel at the geometric center of the local detection area; the pixel P1 of the second target position may lie on the pixels traversed by the edge line of the local detection area 200 (see FIG. 3A), or in the non-local-detection region outside the local detection area 200 (not shown), adjacent (not shown) or not adjacent (not shown) to the pixels on its contour line.
As shown in FIG. 2A, when there is a single second target position pixel, the target brightness is the brightness of the pixel P1 of the second target position; one can compute whether the difference between the brightness of pixel O and the brightness of pixel P1 is greater than or equal to the brightness threshold. If it is, the pixel brightness of the first target position satisfies the spot center constraint condition; otherwise it does not.
When there are multiple second target positions, FIG. 2B shows another schematic diagram of a spot recognition principle of an exemplary embodiment of the present disclosure. As shown in FIG. 2B, the local detection area 200 is rectangular, the pixel of the first target position is denoted O, and there are four second target positions: an upper pixel P1 above the pixel O, a lower pixel P2 below it, a left pixel P3 to its left, and a right pixel P4 to its right. It should be understood that P1, P2, P3, and P4 may lie on the pixels traversed by the edge line of the local detection area 200, or in the non-local-detection region outside the area (not shown), adjacent (not shown) or not adjacent (not shown) to the pixels on its contour line.
As shown in FIG. 2B, with four second target position pixels, one can compute the average of the pixel brightnesses of the upper pixel P1, lower pixel P2, left pixel P3, and right pixel P4, take that average as the target brightness, and compute whether the difference between the brightness of pixel O and the target brightness is greater than or equal to the brightness threshold. If it is, the pixel brightness of the first target position satisfies the spot center constraint condition; otherwise it does not.
When spot recognition follows the principle shown in FIG. 2B, if one of P1, P2, P3, and P4 is abnormally bright, the other three can be averaged, combining pixel brightnesses from multiple directions and thus accurately judging whether the local detection area 200 is a real spot.
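The following sketch implements the FIG. 2B variant of the constraint check. The function name, the default `offset`, and the choice to keep only in-bounds samples (the description also allows samples just outside the area) are illustrative assumptions.

```python
def satisfies_spot_center_constraint(region, first_pos, threshold, offset=3):
    """Spot center constraint, FIG. 2B variant: the brightest position
    must exceed the mean brightness of its edge samples by at least
    `threshold`.

    region: 2D grayscale array; first_pos: (x, y) inside the region;
    offset: distance to the four second target positions, here the
    (x±3, y) and (x, y±3) samples of the 6x6-pixel example.
    """
    h, w = region.shape
    x, y = first_pos
    samples = []
    for dx, dy in ((offset, 0), (-offset, 0), (0, offset), (0, -offset)):
        px, py = x + dx, y + dy
        if 0 <= px < w and 0 <= py < h:  # keep only in-bounds samples
            samples.append(float(region[py, px]))
    if not samples:  # degenerate region: treat as no real spot
        return False
    target_brightness = sum(samples) / len(samples)
    return float(region[y, x]) - target_brightness >= threshold
```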
In an optional manner, the exemplary embodiments may detect the local detection area with one or more spot detection algorithms to obtain the pixel brightness of the first target position. The spot detection algorithm may be any of pixel brightness retrieval, whole-image gradients, image heat maps, and the like; in addition, methods such as the gray centroid method, Hough circle detection, and circle fitting may be used for spot detection, but the options are not limited to these. In spot detection, the measured brightness is often expressed as a grayscale value, though other representations may be used; in any case it should reflect brightness.
In some embodiments, a single spot detection algorithm may be applied to the local detection area to obtain the pixel brightness of the first target position. The difference between the pixel brightness of the first target position and the target brightness is then taken, and the result is compared against the brightness threshold to perform spot recognition.
In another optional manner, the pixel brightness of the first target position of the exemplary embodiments may be a weighted value over multiple brightnesses of the first target position. For example, multiple spot detection algorithms may be used to detect the local detection area, obtaining multiple brightnesses of the first target position, which are then weighted to obtain the weighted brightness value of the first target position. That weighted value is then differenced against the target brightness and the result compared with the brightness threshold to perform spot recognition.
In addition, a weight prediction model may be trained on historical data to predict the weights corresponding to the various spot detection algorithms, or those weights may be set from practical experience, reducing the error of the final computed value and judging more accurately whether the tracked spot is a real spot. There may be one or more weight prediction models, and one model may predict the weights of one or more spot detection algorithms.
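A minimal sketch of this weighted fusion, under the assumption that each detector can be modeled as a callable returning the brightness it measures at its first target position; the function and detector names are hypothetical.

```python
def weighted_first_target_brightness(region, detectors, weights):
    """Fuse several spot detection algorithms: each detector returns
    the brightness it measures at its first target position, and the
    fused (weighted) value is what gets compared to the threshold.

    detectors: callables mapping region -> brightness; weights: one
    weight per detector, fixed from experience or predicted by a
    trained weight model as the description suggests.
    """
    values = [detector(region) for detector in detectors]
    return sum(w * v for w, v in zip(weights, values))

# e.g. weighted_first_target_brightness(region,
#          [brightest_pixel, centroid_brightness], [0.6, 0.4])
```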
To facilitate understanding of the method of the exemplary embodiments, an implementation process of an embodiment of the present disclosure is described below by way of example; it should be understood that the following process is explanatory, not limiting.
A camera with a resolution of 400x400 and a frame rate of 30 frames/s acquires the eye image sequence requiring spot detection. Spot detection is performed on the first frame eye image of the sequence (that is, the target eye image) to obtain the center coordinates of each spot; these spot center coordinates can be regarded as the starting coordinates of each spot in the sequence and are saved as the spot sequence array.
FIG. 3A shows the first schematic diagram of the spot tracking process for an eye video sequence of an exemplary embodiment of the present disclosure. As shown in FIG. 3A, a first spot 301, a second spot 302, and a third spot 303 are present on the eyeball surface in the first frame eye image of the eye video sequence 300. After spot detection, the spot center coordinates are determined as (x11, y11) for the first spot 301, (x12, y12) for the second spot 302, and (x13, y13) for the third spot 303, and these spot center coordinates are saved in the spot sequence array Array = [(x11, y11), (x12, y12), (x13, y13)], where x11, x12, and x13 are the abscissas of the first, second, and third spots of the first frame eye image, and y11, y12, and y13 are the ordinates of the first, second, and third spots of the first frame eye image.
In practical applications, spot centers can be tracked in the second frame eye image in the order in which their coordinates were detected in the first frame eye image. For example, statistics show that the area of a spot formed on the eyeball surface is generally within 20 pixels; at a camera frame rate of 30 frames/s, the inter-frame spot movement distance is within 3 pixels. Therefore, taking the three spot center coordinates detected in the first frame eye image as the geometric centers of the corresponding local detection areas, rectangular local detection areas of 6x6 pixels are selected, and a spot detection algorithm is then used to determine the pixel coordinates of the first target position corresponding to each local detection area in the second frame eye image. It should be understood that spot tracking for the frames after the second frame can also follow the spot detection process of the second frame. The second frame eye image is described below as an example.
FIG. 3B shows the second schematic diagram of the spot tracking process for an eye video sequence of an exemplary embodiment of the present disclosure. As shown in FIG. 3B, based on the spot center coordinates of the first spot 301, the corresponding first local detection area G1 is located in the second frame eye image so that its geometric center coordinates are (x11, y11); based on the spot center coordinates of the second spot 302, the corresponding second local detection area G2 is located so that its geometric center coordinates are (x12, y12); and based on the spot center coordinates of the third spot 303, the corresponding third local detection area G3 is located so that its geometric center coordinates are (x13, y13).
For the second frame eye image of the eye image sequence 300, a spot detection algorithm detects the first local detection area G1 and obtains the pixel coordinates (x21, y21) of the first target position located in G1; it detects the second local detection area G2 and obtains the pixel coordinates (x22, y22) of the first target position located in G2; and it detects the third local detection area G3 and obtains the pixel coordinates (x23, y23) of the first target position located in G3.
The exemplary embodiments of the present disclosure may use the gray centroid method to detect the gray centroid coordinates of a local detection area, defining the gray centroid of the local detection area as the first target position. In this process, the gray centroid coordinates can be computed from the light intensity distribution of the local detection area, improving the accuracy of identifying the brightest position in the area and further guaranteeing the precision of subsequent spot recognition.
Illustratively, the detection of the gray centroid coordinates of a local detection area in the exemplary embodiments proceeds as follows:
First, compute the sum Gall of the grayscale values of all pixels in the local detection area. FIG. 4 shows a schematic diagram of the coordinate system of the local detection area of an exemplary embodiment of the present disclosure. A local detection area coordinate system is established with the lower-left pixel C of the area as the origin, whose coordinates are (0, 0); the upper-right pixel G of the area has coordinates (m, n), so the area comprises n rows and m columns of pixels. On this basis, the sum of the grayscale values of all pixels in the local detection area is Gall = Σ(i=1..n) Σ(j=1..m) I(i, j), where I(i, j) is the brightness of the pixel in row i and column j of the local detection area.
Second, compute Gx, the sum over the area of each pixel's brightness multiplied by the square of that pixel's X coordinate in the local detection area coordinate system, and Gy, the sum of each pixel's brightness multiplied by the square of its Y coordinate.
Third, compute the gray centroid coordinates (x2, y2) of the local detection area from Gall, Gx, and Gy.
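A minimal sketch of this computation under the FIG. 4 coordinate convention. One caution: the second step above speaks of squared coordinates, which would not by itself yield a position inside the area; the conventional gray centroid uses first powers, (x2, y2) = (Gx / Gall, Gy / Gall) with Gx = Σ I·x and Gy = Σ I·y, and that is what this sketch assumes.

```python
import numpy as np

def gray_centroid(region):
    """Gray centroid of a local detection area under the FIG. 4
    convention: origin at the lower-left pixel C, X along columns,
    Y growing upward. Returns (x2, y2) = (Gx / Gall, Gy / Gall).
    """
    rows, cols = region.shape
    g_all = float(region.sum())
    xs = np.arange(cols, dtype=float)          # X coordinate per column
    ys = np.arange(rows, dtype=float)[::-1]    # Y grows upward from C
    g_x = float((region * xs[None, :]).sum())
    g_y = float((region * ys[:, None]).sum())
    return g_x / g_all, g_y / g_all
```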
To ensure that the obtained local detection area corresponds to a tracked spot, one can judge whether the centroid coordinates of the local detection area mark a real spot. For example, the pixel coordinates of the first target position can be determined from the gray centroid coordinates (x, y) of the local detection area; the grayscale value of the first target position, i.e., the centroid grayscale value of the local detection area, is then determined from those pixel coordinates, while the grayscale average of the four second target positions is computed to obtain the target brightness. On this basis, the difference between the grayscale value of the first target position and the target brightness is taken and compared with the brightness threshold to decide whether the spot center constraint condition is satisfied. When the local detection area is the aforementioned rectangular region of 6x6 pixels, the pixel coordinates of the four second target positions are (x−3, y), (x+3, y), (x, y−3), and (x, y+3).
If the difference is greater than or equal to the brightness threshold, the theoretical region of the first frame eye image's spot on the second frame eye image (which may be called the theoretical spot) can be considered to intersect the local detection area, i.e., to overlap it at least partially; for example, the local detection area contains the theoretical spot, or the two partially overlap. Moreover, the local detection areas detected in the to-be-detected frame eye image correspond one-to-one with the spots detected in the target frame eye image, so when a local detection area intersects a theoretical spot, the theoretical spot partially coinciding with that area corresponds to a spot in the target frame eye image, and the pixel coordinates of the first target position within that local detection area can be used to update the corresponding spot's pixel coordinates in the spot sequence array. For example, for the k-th spot of the target frame eye image, the pixel coordinates corresponding to the k-th spot in the spot sequence array can be updated to the pixel coordinates of the first target position of the local detection area. If the difference is less than the brightness threshold, the local detection area can be considered to have no intersection with the theoretical spot and to contain no real spot; an empty array is then used as a placeholder at the position corresponding to that local detection area in the spot sequence array, excluding the vanished spot from spot detection in subsequent frame eye images. It should be understood that the theoretical spot can be regarded as the region where the spot of the first frame eye image is tracked on the second frame eye image, based on the inter-frame spot movement characteristics.
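Tying the earlier sketches together, one tracking step over the frame under test might read as follows. The helper functions are the hypothetical ones sketched above, and the default threshold of 40 gray levels is an arbitrary illustrative value, not taken from the disclosure.

```python
import numpy as np

def track_spots(prev_spots, frame, half_size=3, threshold=40.0):
    """One tracking step over the frame under test: crop each spot's
    local detection area, take its brightest pixel as the first target
    position, apply the spot center constraint, and update or blank
    the spot sequence array accordingly.
    """
    tracked = []
    for cx, cy in prev_spots:
        region, (x0, y0) = local_detection_region(frame, (cx, cy), half_size)
        ry, rx = np.unravel_index(np.argmax(region), region.shape)
        if satisfies_spot_center_constraint(region, (rx, ry), threshold):
            tracked.append((x0 + rx, y0 + ry))  # spot tracked: update
        else:
            tracked.append((0, 0))              # spot vanished: blank
    return tracked
```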
Taking FIGS. 3A and 3B as an example: in FIG. 3A, the spot sequence array of the first frame eye image is Array = [(x11, y11), (x12, y12), (x13, y13)]. Suppose the first local detection area intersects the theoretical spot corresponding to the first spot; then (x11, y11) in Array can be updated to (x21, y21). Suppose the second local detection area has no intersection with the theoretical spot corresponding to the second spot; then an empty array is used as a placeholder for (x12, y12) in Array. Suppose the third local detection area intersects the theoretical spot corresponding to the third spot; then (x13, y13) in Array can be updated to (x23, y23). Thus, after spot detection on the second frame eye image is completed, the exemplary embodiments can update Array = [(x11, y11), (x12, y12), (x13, y13)] to Array = [(x21, y21), (0, 0), (x23, y23)], obtaining the tracking results of the first, second, and third spots in the second frame eye image. Because the latest spot sequence array inherits the positions the first frame eye image's spots held in the array, the exemplary embodiments can use the latest array to determine whether a spot of the first frame eye image has vanished in the second frame eye image, the index of any spot that vanished there, and the pixel coordinates of the spots that were tracked. On this basis, when tracking a designated spot, the exemplary embodiments can query the latest spot sequence array at the designated spot's position in the array to determine whether it was tracked in the latest eye image. If the motion trajectory of a spot in the eye video sequence is to be obtained, the history of the spot sequence array can be recorded, and the trajectory of the spot in the eye video sequence derived from that history.
The one or more technical solutions provided in the embodiments of the present disclosure consider that the inter-frame spot movement distance is short: after the target frame eye image has been detected to contain a spot, the local detection area is acquired from the to-be-detected frame eye image based on the spot, narrowing the spot detection range of the to-be-detected frame eye image. On this basis, the position with the highest brightness in the local detection area can be set as the first target position, and the spot recognition result of the to-be-detected frame eye image is then determined from the pixel brightness judgment result of the first target position, determining whether the spot is tracked in the to-be-detected frame eye image and improving spot detection accuracy. Thus, when tracking spots, the method of the exemplary embodiments combines the inter-frame movement characteristics of spots and performs local spot detection on the to-be-detected frame eye image while preserving detection accuracy, avoiding a global search over the image.
Testing has found that, relative to the global search method, the method of the exemplary embodiments can reduce the computational load by a factor on the order of 10^3. The method of the exemplary embodiments can therefore reduce the computational load and device power consumption and improve the endurance of mobile devices while preserving spot detection accuracy.
The above has introduced the solutions provided by the embodiments of the present disclosure mainly from the perspective of the electronic device. It can be understood that, to realize the above functions, the electronic device includes hardware structures and/or software modules corresponding to each function. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present disclosure can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.
The embodiments of the present disclosure may divide the electronic device into functional units according to the above method examples; for example, each function may be divided into a functional module, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present disclosure is schematic and is merely a logical functional division; other divisions are possible in actual implementation.
In the case of dividing functional modules according to each function, the exemplary embodiments of the present disclosure provide a light spot tracking apparatus, which may be an electronic device or a chip applied to an electronic device. FIG. 5 shows a schematic functional block diagram of the light spot tracking apparatus according to an exemplary embodiment of the present disclosure. As shown in FIG. 5, the light spot tracking apparatus 500 includes:
an extraction module 501 configured to extract a target frame eye image and a to-be-detected frame eye image from an eye image sequence, where the target frame eye image contains a light spot, and the acquisition time of the target frame eye image is earlier than that of the to-be-detected frame eye image;
an acquisition module 502 configured to acquire a local detection area from the to-be-detected frame eye image based on the light spot; and
a recognition module 503 configured to determine the spot recognition result of the to-be-detected frame eye image in response to the pixel brightness judgment result of a first target position located in the local detection area, the first target position being the position with the highest brightness in the local detection area.
In a possible implementation, the geometric center of the local detection area coincides with the spot center.
In a possible implementation, the spot center is the position with the highest brightness in the spot; or the spot center is the geometric center of the spot, and the size of the local detection area is larger than the size of the spot.
In a possible implementation, as shown in FIG. 5, the acquisition module 502 is configured to determine the size parameter of the local detection area based on the acquisition frame rate of the image acquisition device, and to determine the local detection area based on the spot center of the target frame eye image and the size parameter of the local detection area.
In a possible implementation, the size of the local detection area is positively correlated with the inter-frame spot movement distance.
In a possible implementation, as shown in FIG. 5, the recognition module 503 is further configured to detect the local detection area with a gray centroid algorithm to obtain the pixel brightness of the first target position.
In a possible implementation, as shown in FIG. 5, the pixel brightness of the first target position is a weighted value over multiple brightnesses of the first target position, and the recognition module 503 is further configured to detect the local detection area with multiple spot detection algorithms, obtain the multiple brightnesses of the first target position, and weight them to obtain the weighted brightness value of the first target position.
In a possible implementation, as shown in FIG. 5, the recognition module 503 is configured to: if the pixel brightness of the first target position located in the local detection area satisfies the spot center constraint condition, determine that the spot recognition result includes: the local detection area intersects the to-be-detected spot, the to-be-detected spot being the spot of the target frame eye image as tracked in the to-be-detected frame eye image; and if the pixel brightness of the first target position located in the local detection area does not satisfy the spot center constraint condition, determine that the spot recognition result includes: the spot of the target frame eye image has vanished in the to-be-detected frame eye image.
In a possible implementation, as shown in FIG. 5, the target frame eye image contains multiple spots, and the recognition module 503 is further configured to save the multiple spot center coordinates of the target frame eye image in a spot sequence array; if the first target position located in the local detection area satisfies the spot center constraint condition, update the spot center coordinates corresponding to the local detection area in the spot sequence array with the coordinates of the first target position; and if it does not, use an empty array as a placeholder at the position corresponding to the local detection area in the spot sequence array.
In a possible implementation, the spot center constraint condition includes: the difference between the pixel brightness of the first target position and a target brightness is greater than or equal to a brightness threshold, where the target brightness is a brightness determined from the pixel brightness of at least one second target position, the second target position being located at the edge of the local detection area.
In a possible implementation, the distance between the second target position and the first target position satisfies d ≤ d' ≤ dmax, where d denotes the distance between the first target position and the contour of the local detection area, d' denotes the distance between the first target position and the second target position, and dmax denotes the maximum distance between the first target position and the second target position.
FIG. 6 shows a schematic block diagram of a chip according to an exemplary embodiment of the present disclosure. As shown in FIG. 6, the chip 600 includes one or more (including two) processors 601 and a communication interface 602. The communication interface 602 can support the electronic device in performing the data transmission and reception steps of the above method, and the processor 601 can support the electronic device in performing the data processing steps of the above method.
Optionally, as shown in FIG. 6, the chip 600 further includes a memory 603, which may include read-only memory and random access memory and provide operating instructions and data to the processor. Part of the memory may also include non-volatile random access memory (NVRAM).
In some implementations, as shown in FIG. 6, the processor 601 performs corresponding operations by invoking operating instructions stored in the memory (which may be stored in an operating system). The processor 601 controls the processing operations of any one of the terminal devices and may also be called a central processing unit (CPU). The memory 603 may include read-only memory and random access memory and provide instructions and data to the processor 601; part of the memory 603 may also include NVRAM. In application, the processor, the communication interface, and the memory are coupled together by a bus system, which may include a power bus, a control bus, and status signal buses in addition to a data bus. For clarity of illustration, however, the various buses are all labeled as the bus system 604 in FIG. 6.
The method disclosed in the above embodiments of the present disclosure may be applied in, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. In the course of implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in software form. The processor may be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present disclosure may be embodied directly as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module may reside in storage media mature in the art such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium resides in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The exemplary embodiments of the present disclosure further provide an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, causes the electronic device to perform the method according to the embodiments of the present disclosure.
The exemplary embodiments of the present disclosure further provide a non-transitory computer-readable storage medium storing a computer program, where the computer program, when executed by a processor of a computer, causes the computer to perform the method according to the embodiments of the present disclosure.
The exemplary embodiments of the present disclosure further provide a computer program product, including a computer program, where the computer program, when executed by a processor of a computer, causes the computer to perform the method according to the embodiments of the present disclosure.
Referring to FIG. 7, a structural block diagram of an electronic device 700 that can serve as a server or client of the present disclosure will now be described; it is an example of a hardware device that can be applied to various aspects of the present disclosure. Electronic devices are intended to represent various forms of digital electronic computer equipment, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in FIG. 7, the electronic device 700 includes a computing unit 701, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 702 or loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 can also store the various programs and data required for the operation of the device 700. The computing unit 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704; an input/output (I/O) interface 705 is also connected to the bus 704.
As shown in FIG. 7, multiple components in the electronic device 700 are connected to the I/O interface 705, including an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the electronic device 700; it can receive inputted digital or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, a speaker, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 708 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver and/or chipset, such as a BluetoothTM device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
As shown in FIG. 7, the computing unit 701 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, and so on. The computing unit 701 performs the various methods and processes described above. For example, in some embodiments, the method of the exemplary embodiments of the present disclosure may be implemented as a computer software program tangibly contained in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. In some embodiments, the computing unit 701 may be configured to perform the method in any other appropriate manner (for example, by means of firmware).
Program code for implementing the method of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, so that, when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by, or in conjunction with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in the present disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (for example, a magnetic disk, optical disk, memory, or programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display device for displaying information to the user (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor); and a keyboard and a pointing device (for example, a mouse or trackball) through which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).
The systems and techniques described herein may be implemented in a computing system that includes a backend component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a frontend component (for example, a user computer with a graphical user interface or web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such backend, middleware, or frontend components. The components of the system may be interconnected by digital data communication in any form or medium (for example, a communication network); examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet. A computer system may include clients and servers. Clients and servers are generally remote from each other and typically interact over a communication network; the client-server relationship arises through computer programs running on the respective computers and having a client-server relationship with each other.
In the above embodiments, implementation may be wholly or partly in software, hardware, firmware, or any combination thereof. When implemented in software, it may take the form, wholly or partly, of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present disclosure are performed wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a terminal, a display terminal, or another programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium such as a floppy disk, hard disk, or magnetic tape; an optical medium such as a digital video disc (DVD); or a semiconductor medium such as a solid state drive (SSD).
Although the present disclosure has been described in conjunction with specific features and embodiments thereof, it is evident that various modifications and combinations can be made without departing from its spirit and scope. Accordingly, the specification and drawings are merely exemplary illustrations of the present disclosure as defined by the appended claims and are deemed to cover any and all modifications, variations, combinations, or equivalents within its scope. Obviously, those skilled in the art can make various changes and variations to the present disclosure without departing from its spirit and scope; if such modifications and variations fall within the scope of the claims of the present disclosure and their equivalent technologies, the present disclosure is also intended to include them.

Claims (14)

  1. A light spot tracking method, the method comprising:
    extracting a target frame eye image and a to-be-detected frame eye image from an eye image sequence, wherein the target frame eye image contains a light spot, and the acquisition time of the target frame eye image is earlier than the acquisition time of the to-be-detected frame eye image;
    acquiring a local detection area from the to-be-detected frame eye image based on the light spot; and
    determining a light spot recognition result of the to-be-detected frame eye image in response to a pixel brightness judgment result of a first target position located in the local detection area, the first target position being the position with the highest brightness in the local detection area.
  2. The method according to claim 1, wherein the geometric center of the local detection area coincides with a spot center.
  3. The method according to claim 2, wherein the spot center is the position with the highest brightness in the light spot; or,
    the spot center is the geometric center of the light spot, and the size of the local detection area is larger than the size of the light spot.
  4. The method according to any one of claims 1 to 3, wherein the eye image sequence is acquired by an image acquisition device, and acquiring the local detection area from the to-be-detected frame eye image based on the light spot comprises:
    determining a size parameter of the local detection area based on an acquisition frame rate of the image acquisition device; and
    determining the local detection area based on the spot center of the target frame eye image and the size parameter of the local detection area.
  5. The method according to claim 4, wherein the size of the local detection area is positively correlated with the inter-frame spot movement distance.
  6. The method according to any one of claims 1 to 5, the method further comprising:
    detecting the local detection area with a gray centroid algorithm to obtain the pixel brightness of the first target position.
  7. The method according to any one of claims 1 to 6, wherein the pixel brightness of the first target position is a weighted value over multiple brightnesses of the first target position, the method further comprising:
    detecting the local detection area with multiple spot detection algorithms to obtain multiple brightnesses of the first target position; and
    weighting the multiple brightnesses of the first target position to obtain the weighted brightness value of the first target position.
  8. The method according to any one of claims 1 to 7, wherein determining the light spot recognition result of the to-be-detected frame eye image in response to the pixel brightness judgment result of the first target position located in the local detection area comprises:
    if the pixel brightness of the first target position located in the local detection area satisfies a spot center constraint condition, determining that the light spot recognition result comprises: the local detection area intersects a to-be-detected spot, the to-be-detected spot being the spot of the target frame eye image as tracked in the to-be-detected frame eye image; and
    if the pixel brightness of the first target position located in the local detection area does not satisfy the spot center constraint condition, determining that the light spot recognition result comprises: the spot of the target frame eye image has vanished in the to-be-detected frame eye image.
  9. The method according to any one of claims 1 to 8, wherein the target frame eye image contains multiple light spots, the method further comprising:
    saving the multiple spot center coordinates of the target frame eye image in a spot sequence array;
    if the first target position located in the local detection area satisfies the spot center constraint condition, updating the spot center coordinates corresponding to the local detection area in the spot sequence array with the coordinates of the first target position; and
    if the first target position located in the local detection area does not satisfy the spot center constraint condition, using an empty array as a placeholder at the position corresponding to the local detection area in the spot sequence array.
  10. The method according to claim 8, wherein the spot center constraint condition comprises: the difference between the pixel brightness of the first target position and a target brightness is greater than or equal to a brightness threshold; wherein the target brightness is a brightness determined from the pixel brightness of at least one second target position, the second target position being located at the edge of the local detection area.
  11. The method according to claim 8, wherein the distance between the second target position and the first target position satisfies:
    d ≤ d' ≤ dmax, where d denotes the distance between the first target position and the contour of the local detection area, d' denotes the distance between the first target position and the second target position, and dmax denotes the maximum distance between the first target position and the second target position.
  12. A light spot tracking apparatus, the apparatus comprising:
    an extraction module configured to extract a target frame eye image and a to-be-detected frame eye image from an eye image sequence, wherein the target frame eye image contains a light spot, and the acquisition time of the target frame eye image is earlier than the acquisition time of the to-be-detected frame eye image;
    an acquisition module configured to acquire a local detection area from the to-be-detected frame eye image based on the light spot; and
    a recognition module configured to determine a light spot recognition result of the to-be-detected frame eye image in response to a pixel brightness judgment result of a first target position located in the local detection area, the first target position being the position with the highest brightness in the local detection area.
  13. An electronic device, comprising:
    at least one processor; and,
    a memory storing a program;
    wherein the program comprises instructions that, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 11.
  14. A non-transitory computer-readable storage medium storing computer instructions, the computer instructions being configured to cause a computer to perform the method according to any one of claims 1 to 11.
PCT/CN2023/132630 2022-12-14 2023-11-20 一种光斑追踪方法、装置、电子设备及存储介质 WO2024125217A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211608969.8 2022-12-14
CN202211608969.8A CN115965653B (zh) 2022-12-14 2022-12-14 一种光斑追踪方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2024125217A1 true WO2024125217A1 (zh) 2024-06-20

Family

ID=87362795

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/132630 WO2024125217A1 (zh) 2022-12-14 2023-11-20 一种光斑追踪方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN115965653B (zh)
WO (1) WO2024125217A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965653B (zh) * 2022-12-14 2023-11-07 北京字跳网络技术有限公司 一种光斑追踪方法、装置、电子设备及存储介质
CN117053718B (zh) * 2023-10-11 2023-12-12 贵州黔程弘景工程咨询有限责任公司 基于梁底线形测量的梁底线形模型生成方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110221919A1 (en) * 2010-03-11 2011-09-15 Wenbo Zhang Apparatus, method, and system for identifying laser spot
CN107506023A (zh) * 2017-07-20 2017-12-22 武汉秀宝软件有限公司 一种墙面图像红外线光斑的追踪方法及系统
CN110191287A (zh) * 2019-06-28 2019-08-30 Oppo广东移动通信有限公司 对焦方法和装置、电子设备、计算机可读存储介质
CN112991393A (zh) * 2021-04-15 2021-06-18 北京澎思科技有限公司 目标检测与跟踪方法、装置、电子设备及存储介质
CN115965653A (zh) * 2022-12-14 2023-04-14 北京字跳网络技术有限公司 一种光斑追踪方法、装置、电子设备及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857254B (zh) * 2019-01-31 2020-06-05 京东方科技集团股份有限公司 瞳孔定位方法和装置、vr/ar设备以及计算机可读介质
CN110051319A (zh) * 2019-04-23 2019-07-26 七鑫易维(深圳)科技有限公司 眼球追踪传感器的调节方法、装置、设备及存储介质
CN114429670A (zh) * 2020-10-29 2022-05-03 北京七鑫易维信息技术有限公司 瞳孔检测方法、装置、设备及存储介质
CN112287805A (zh) * 2020-10-29 2021-01-29 地平线(上海)人工智能技术有限公司 运动物体的检测方法、装置、可读存储介质及电子设备
CN112692453B (zh) * 2020-12-16 2022-07-15 西安中科微精光子科技股份有限公司 利用高速相机识别气膜孔穿透区域的方法、系统及介质
CN113887547B (zh) * 2021-12-08 2022-03-08 北京世纪好未来教育科技有限公司 关键点检测方法、装置和电子设备
CN114862828A (zh) * 2022-05-30 2022-08-05 Oppo广东移动通信有限公司 光斑搜索方法装置、计算机可读介质和电子设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110221919A1 (en) * 2010-03-11 2011-09-15 Wenbo Zhang Apparatus, method, and system for identifying laser spot
CN107506023A (zh) * 2017-07-20 2017-12-22 武汉秀宝软件有限公司 一种墙面图像红外线光斑的追踪方法及系统
CN110191287A (zh) * 2019-06-28 2019-08-30 Oppo广东移动通信有限公司 对焦方法和装置、电子设备、计算机可读存储介质
CN112991393A (zh) * 2021-04-15 2021-06-18 北京澎思科技有限公司 目标检测与跟踪方法、装置、电子设备及存储介质
CN115965653A (zh) * 2022-12-14 2023-04-14 北京字跳网络技术有限公司 一种光斑追踪方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN115965653A (zh) 2023-04-14
CN115965653B (zh) 2023-11-07

Similar Documents

Publication Publication Date Title
WO2024125217A1 (zh) 一种光斑追踪方法、装置、电子设备及存储介质
EP3739431B1 (en) Method for determining point of gaze, contrast adjustment method and device, virtual reality apparatus, and storage medium
CN110826519A (zh) 人脸遮挡检测方法、装置、计算机设备及存储介质
KR20200118076A (ko) 생체 검출 방법 및 장치, 전자 기기 및 저장 매체
CN112132265B (zh) 模型训练方法、杯盘比确定方法、装置、设备及存储介质
CN111598038B (zh) 脸部特征点检测方法、装置、设备及存储介质
WO2020047854A1 (en) Detecting objects in video frames using similarity detectors
EP3699808B1 (en) Facial image detection method and terminal device
CN113780201B (zh) 手部图像的处理方法及装置、设备和介质
US11694331B2 (en) Capture and storage of magnified images
CN112135041B (zh) 一种人脸特效的处理方法及装置、存储介质
CN116491893B (zh) 高度近视眼底改变评估方法及装置、电子设备及存储介质
CN115439543A (zh) 孔洞位置的确定方法和元宇宙中三维模型的生成方法
CN114461078B (zh) 一种基于人工智能的人机交互方法
CN114821756A (zh) 一种基于深度学习的眨眼数据统计方法和装置
CN113255456B (zh) 非主动活体检测方法、装置、电子设备及存储介质
CN109726741B (zh) 一种多目标物体的检测方法及装置
CN117392735B (zh) 面部数据处理方法、装置、计算机设备和存储介质
WO2024016809A1 (zh) 刷掌验证的引导方法、装置、终端、存储介质及程序产品
US20220301127A1 (en) Image processing pipeline for optimizing images in machine learning and other applications
JP2010271922A (ja) 話中人物検出方法、話中人物検出装置、および話中人物検出プログラム
CN117218701A (zh) 快速校准的视线追踪方法
CN117292417A (zh) 一种关键点检测方法、系统以及相关装置
CN117392734A (zh) 面部数据处理方法、装置、计算机设备和存储介质
CN115984950A (zh) 视线检测方法、装置、电子设备及存储介质