CN114740973A - A tracking method and device - Google Patents

A tracking method and device Download PDF

Info

Publication number
CN114740973A
CN114740973A (application CN202210309192.9A)
Authority
CN
China
Prior art keywords
area
controlled
information
sight line
sensor module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210309192.9A
Other languages
Chinese (zh)
Inventor
丁博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202210309192.9A
Publication of CN114740973A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a tracking method and device. The tracking method includes: obtaining position information of a first object relative to a region to be controlled and line-of-sight parameters of the first object, and inputting the position information and the line-of-sight parameters into a target tracking model to determine the gaze position of the first object in the region to be controlled. The target tracking model includes a first tracking model obtained by processing an original tracking model based on change information of a sensor module and a display area of the region to be controlled, where the sensor module is used to collect the position information and/or the line-of-sight parameters.

Description

A tracking method and device

Technical Field

The present application relates to the field of electronic technologies, and in particular to a tracking method and device.

Background Art

Eye tracking is a machine vision technique in which a device captures images of the user's eyes, an algorithm analyzes them, and the position of the user's gaze on the display screen is obtained; it has high application value. Eye tracking first requires determining the relative positional relationship between the user's eyes and the display screen, but when the user moves within a large range, and especially when the user's eyes are far from the display screen, this relative positional relationship is difficult to determine.

To address this, current practice is to fix the eye tracking device at a position with a known location relative to the display screen of a specific device model. However, for such a screen control system, because the display screen and the eye tracking device have a fixed relative positional relationship and the screen's size and position are fixed, the user can only use the eye tracking system close to the display screen and within a specific angular range relative to it. Moreover, since the eye tracking device must remain fixed relative to the screen of the corresponding display device, the user must stay at a very short distance, which imposes many restrictions and makes the system inconvenient to use.

Summary of the Invention

Embodiments of the present application provide a tracking method and device.

According to a first aspect of the embodiments of the present application, a tracking method is provided. The method includes: obtaining position information of a first object relative to an area to be controlled and line-of-sight parameters of the first object, where the area to be controlled at least includes a display area used by an electronic device to output display content; and inputting the position information and the line-of-sight parameters into a target tracking model to determine the gaze position of the first object in the area to be controlled. The target tracking model includes a first tracking model obtained by processing an original tracking model based on change information of a sensor module and the display area, where the sensor module is used to collect the position information and/or the line-of-sight parameters.

According to an embodiment of the present application, inputting the position information and the line-of-sight parameters into the target tracking model includes: obtaining the change information of the sensor module and the display area; if the change information meets an adjustment condition, inputting the position information and the line-of-sight parameters into the first tracking model; and/or, if the change information does not meet the adjustment condition, inputting the position information and the line-of-sight parameters into the original tracking model, where the original tracking model is a model trained based on the first object or on a second object different from the first object.

According to an embodiment of the present application, if the amount of change of at least one of the display area, the sensor module, or the relative positional relationship between the sensor module and the display area is not less than a first set value, the change information is determined to meet the adjustment condition; and/or, if none of the display area, the sensor module, and the relative positional relationship between them has changed, or each change is smaller than a second set value, the change information is determined not to meet the adjustment condition.
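The two-threshold rule above can be sketched in Python. The function name, the scalar "amount of change" inputs, and the numeric set values are illustrative assumptions, not from the patent:

```python
def change_meets_adjustment(display_delta, sensor_delta, relative_delta,
                            first_set_value=0.05, second_set_value=0.01):
    """Return True if the change information meets the adjustment condition.

    Each *_delta is a scalar amount of change (e.g. displacement in meters)
    for the display area, the sensor module, and their relative positional
    relationship; the two set values are illustrative placeholders.
    """
    deltas = (display_delta, sensor_delta, relative_delta)
    # Condition met: at least one change is not less than the first set value.
    if any(d >= first_set_value for d in deltas):
        return True
    # Condition not met: no change, or every change below the second set value.
    if all(d < second_set_value for d in deltas):
        return False
    # Changes between the two set values: treated here as not meeting the condition.
    return False
```

A caller would route the position information and line-of-sight parameters to the first tracking model when this returns True and to the original tracking model otherwise.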

According to an embodiment of the present application, the training process of the original tracking model includes: acquiring region information of the area to be controlled, the region information at least including the size and/or shape of the area to be controlled; obtaining the distance between a calibration object and the area to be controlled, as well as the line-of-sight parameters of the calibration object while it gazes at multiple designated positions within the area to be controlled, the line-of-sight parameters including a line-of-sight angle and a line-of-sight height; and constructing a three-dimensional spatial coordinate system based on the region information, the distance, and the line-of-sight parameters as the original tracking model. The calibration object is the first object or a second object different from the first object.
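As a rough illustration of this calibration step, the sketch below fits a per-axis linear map from (line-of-sight angle, line-of-sight height) to screen coordinates using calibration samples such as the four corners. This is a simplified stand-in for the three-dimensional coordinate system the patent constructs; all names and the linear form are assumptions:

```python
def build_original_model(samples):
    """Fit a per-axis linear map from (yaw_deg, gaze_height) to screen (x, y).

    samples: list of ((yaw_deg, gaze_height), (x, y)) calibration pairs,
    e.g. recorded while the calibration object fixates the four corners.
    Returns a model(yaw_deg, gaze_height) -> (x, y) callable.
    """
    def fit(a, b):
        # Least-squares slope k and intercept c for b ~ k*a + c.
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        k = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) \
            / sum((ai - ma) ** 2 for ai in a)
        return k, mb - k * ma

    kx, cx = fit([s[0][0] for s in samples], [s[1][0] for s in samples])
    ky, cy = fit([s[0][1] for s in samples], [s[1][1] for s in samples])
    return lambda yaw_deg, gaze_height: (kx * yaw_deg + cx,
                                         ky * gaze_height + cy)
```

After calibration, the returned callable plays the role of the original tracking model: it maps newly collected line-of-sight parameters to a gaze position in the area to be controlled.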

According to an embodiment of the present application, determining the gaze position of the first object in the area to be controlled includes: determining the perpendicular distance of the first object's eyeball relative to the display area; determining the angle between the first object's current line of sight and the display area as well as the first object's current line-of-sight height; and, from the angle, the line-of-sight height, and the perpendicular distance, determining the intersection of the line of sight with the display area as the gaze position.
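The intersection computation can be illustrated with simple planar geometry. This is a hedged sketch under an assumed yaw/pitch decomposition of the gaze angle; the patent does not specify this exact formulation:

```python
import math

def gaze_point(vertical_distance, yaw_deg, pitch_deg, eye_height):
    """Intersect a gaze ray with the display plane.

    vertical_distance: perpendicular distance from the eyeball to the display
    yaw_deg: horizontal angle between the line of sight and the display normal
    pitch_deg: vertical angle of the line of sight above the horizontal
    eye_height: line-of-sight height above the floor
    Returns (x, y): horizontal offset and height of the gaze point on the display.
    """
    x = vertical_distance * math.tan(math.radians(yaw_deg))
    y = eye_height + vertical_distance * math.tan(math.radians(pitch_deg))
    return x, y
```

For example, an eye 1 m from the screen looking 45 degrees off the normal lands about 1 m to the side of the point directly opposite the eye.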

According to an embodiment of the present application, processing the original tracking model based on the change information of the sensor module and the display area includes: determining, based on the change information, position change information of the first object relative to the area to be controlled and line-of-sight change parameters of the first object; and adjusting the parameters of the original tracking model based on the position change information and the line-of-sight change parameters to obtain the first tracking model.

According to an embodiment of the present application, the sensor module includes a first module and a second module. Correspondingly, obtaining the position information of the first object relative to the area to be controlled and the line-of-sight parameters of the first object includes: if the first object is beyond the collection range of the first module, outputting prompt information and/or controlling the second module to obtain the position information and the line-of-sight parameters.

According to an embodiment of the present application, the method further includes: obtaining position information of multiple objects relative to the area to be controlled and line-of-sight parameters of the multiple objects, the multiple objects including the first object; and determining the gaze position of a second object among the multiple objects in the area to be controlled based on the gaze position of the first object in the area to be controlled, the positional relationship between the first object and the second object, and the line-of-sight parameters of the second object.

According to an embodiment of the present application, the method further includes: obtaining first position information of the first object relative to the area to be controlled at the current moment and first line-of-sight parameters of the first object; obtaining motion information and line-of-sight change information of the first object; predicting, based on the motion information and the line-of-sight change information, second line-of-sight parameters of the first object at a target moment and its second position information relative to the area to be controlled; and determining the gaze position of the first object in the area to be controlled based on the second line-of-sight parameters and the second position information.
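A minimal sketch of this prediction step, assuming constant-rate extrapolation of position and line-of-sight parameters (the patent does not specify the prediction model, so this is purely illustrative):

```python
def predict_gaze(position, velocity, sight_params, sight_rates, dt):
    """Extrapolate position and line-of-sight parameters dt seconds ahead.

    position / velocity: current (x, z) position and its rate of change
    sight_params / sight_rates: current (angle_deg, height) and their rates
    Returns the predicted (second) position and line-of-sight parameters,
    which would then be fed to the tracking model as usual.
    """
    predicted_position = tuple(p + v * dt for p, v in zip(position, velocity))
    predicted_sight = tuple(s + r * dt for s, r in zip(sight_params, sight_rates))
    return predicted_position, predicted_sight
```

A real implementation could replace the constant-rate assumption with, e.g., a Kalman filter or a learned motion model.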

According to a second aspect of the present application, a tracking device is also provided, including: an acquisition module configured to obtain position information of a first object relative to an area to be controlled and line-of-sight parameters of the first object, the area to be controlled at least including a display area used by an electronic device to output display content; and a tracking module configured to input the position information and the line-of-sight parameters into a target tracking model to determine the gaze position of the first object in the area to be controlled. The target tracking model includes a first tracking model obtained by processing an original tracking model based on change information of a sensor module and the display area, and the sensor module is used to collect the position information and/or the line-of-sight parameters.

According to a third aspect of the present application, an electronic device is also provided. The device includes at least one processor, at least one memory connected to the processor, and a bus; the processor and the memory communicate with each other through the bus, and the processor is configured to invoke program instructions in the memory to perform the above tracking method.

With the tracking method and device of the embodiments of the present application, position information of a first object relative to an area to be controlled and line-of-sight parameters of the first object are obtained, and the position information and the line-of-sight parameters are input into a target tracking model to determine the gaze position of the first object in the area to be controlled. The area to be controlled at least includes a display area used by an electronic device to output display content, and the target tracking model includes a first tracking model obtained by processing an original tracking model based on change information of a sensor module and the display area, the sensor module being used to collect the position information and/or the line-of-sight parameters. A target tracking model can therefore be obtained by processing the original tracking model based on the change information of the sensor module and the display area, and that model can be used to track the first object's gaze position. This effectively avoids the complex joint debugging of the sensor module and the electronic device that would otherwise be required whenever the position or model specification of the sensor module or of the electronic device's display area changes. The sensor module can be deployed independently and need not be fixed at a specific position on the screen. The user's use of the electronic device's tracking function is no longer limited to a short distance from the display area, effectively broadening the positions and viewing angles from which the tracking function can be used.

It should be understood that the teachings of the present application need not achieve all of the beneficial effects described above; rather, a specific technical solution may achieve a specific technical effect, and other embodiments of the present application may also achieve beneficial effects not mentioned above.

Brief Description of the Drawings

The above and other objects, features, and advantages of exemplary embodiments of the present application will become readily understood by reading the following detailed description with reference to the accompanying drawings. In the drawings, several embodiments of the present application are shown by way of example and not limitation, wherein:

In the drawings, the same or corresponding reference numerals denote the same or corresponding parts.

FIG. 1 is a schematic diagram of an application scenario of the tracking method according to an embodiment of the present application;

FIG. 2 is a schematic flowchart of an implementation of the tracking method according to an embodiment of the present application;

FIG. 3 shows the spatial correspondence between the display area and the object in the tracking method according to an embodiment of the present application;

FIG. 4 is a schematic diagram of the composition and structure of a tracking device according to an embodiment of the present application.

Detailed Description

The principles and spirit of the present application will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are given only so that those skilled in the art can better understand and implement the present application, and do not limit its scope in any way. Rather, they are provided so that this application will be thorough and complete and will fully convey its scope to those skilled in the art.

The technical solutions of the present application are further elaborated below with reference to the accompanying drawings and specific embodiments.

FIG. 1 shows a schematic diagram of an application scenario of the tracking method according to an embodiment of the present application.

Referring to FIG. 1, the tracking system tracks a user's line of sight and includes at least a sensor module. The user stands at any position facing the display area and within the working range of the sensor module; the sensor module can collect the user's line-of-sight parameters and the user's position information relative to the display area. The user may be a real person, a robot, or an animal whose gaze needs to be tracked; the solution is therefore described below in terms of a first object, a calibration object, and so on. The sensor module may include an image acquisition device and the like. There are four preset gaze positions in the display area, and here the display area is the area to be controlled.

It should be noted that the above application scenario is given only to better explain the specific implementation details of the tracking method of the embodiments of the present application, and is not intended to limit the implementation of the tracking method of the present application.

FIG. 2 shows a schematic flowchart of an implementation of the tracking method according to an embodiment of the present application.

Referring to FIG. 2, the tracking method of the embodiment of the present application includes at least the following operations. Operation 201: obtain position information of a first object relative to an area to be controlled and line-of-sight parameters of the first object, the area to be controlled at least including a display area used by an electronic device to output display content. Operation 202: input the position information and the line-of-sight parameters into a target tracking model to determine the gaze position of the first object in the area to be controlled, where the target tracking model includes a first tracking model obtained by processing an original tracking model based on change information of a sensor module and the display area, the sensor module being used to collect the position information and/or the line-of-sight parameters.

In operation 201, position information of a first object relative to an area to be controlled and line-of-sight parameters of the first object are obtained, where the area to be controlled at least includes a display area used by an electronic device to output display content.

In this embodiment of the present application, the area to be controlled may be the entire display screen of the electronic device, may include only the portion of the display screen currently used to output display content, or may be a projection area projected by the electronic device.

In this embodiment of the present application, the position information may include: the perpendicular distance of the first object relative to the area to be controlled; the ratio of the first object's eye height to the height of the display area, taking the horizontal ground as the reference plane; and the first object's motion trajectory or motion trend. The line-of-sight parameters may include: the first object's line-of-sight height, taking the horizontal ground as the reference plane; the line-of-sight angle determined from the eyeball when the first object gazes at the area to be controlled; and the duration for which the first object's eyeball remains stationary.
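For illustration, the inputs listed above could be grouped in a container like the following (the field names are assumptions for this sketch, not terminology from the patent):

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    """One measurement of the first object, as collected by the sensor module."""
    vertical_distance: float   # perpendicular distance to the area to be controlled
    eye_height_ratio: float    # eye height relative to the display, ground as reference
    sight_height: float        # line-of-sight height above the ground
    sight_angle_deg: float     # gaze angle determined from the eyeball, in degrees
    dwell_seconds: float       # how long the eyeball has remained stationary
```

A tracking model would then consume a stream of such samples to produce gaze positions.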

In this embodiment of the present application, the position information of the first object relative to the area to be controlled and the line-of-sight parameters of the first object can be acquired through the sensor module. The sensor module may include a first module and a second module. Correspondingly, while obtaining the position information and line-of-sight parameters, if the first object is beyond the collection range of the first module, prompt information may be output, for example a reminder to use the second module for gaze tracking such as "Please wear the AR glasses" or "Please stand at the information collection desk and gaze at the screen through the desktop sensor". Alternatively, the second module may be directly controlled to obtain the position information and line-of-sight parameters, or the prompt may be output while the second module is controlled to obtain them.

For example, the first module may be a fixed image acquisition device, and the second module may be mobile glasses with an eye tracking sensor, a desktop sensor, or the like, where the second module's data collection range is larger than the first module's. Thus, in extreme cases where the first object is far from the display area, for example more than ten meters away, peripherals such as glasses with eye tracking sensors or desktop sensors can be configured to effectively increase the accuracy of eye tracking and further improve tracking precision.

In other embodiments, the first module and the second module may also be camera modules with different configurations. For example, the first module may be a fixed-focus or short-focus camera or camera array, and the second module may be a zoom or telephoto camera or camera array; the second module may even be a movable camera or camera array, so as to be distinguished from the first module.

In operation 202, the position information and the line-of-sight parameters are input into the target tracking model to determine the gaze position of the first object in the area to be controlled, where the target tracking model includes a first tracking model obtained by processing an original tracking model based on the change information of the sensor module and the display area, the sensor module being used to collect the position information and/or the line-of-sight parameters.

In this embodiment of the present application, the training process of the original tracking model may include the following operations: acquiring region information of the area to be controlled, the region information at least including the size and/or shape of the area; obtaining the distance between a calibration object and the area to be controlled, as well as the line-of-sight parameters of the calibration object while it gazes at multiple designated positions within the area, the line-of-sight parameters including a line-of-sight angle and a line-of-sight height; and constructing a three-dimensional spatial coordinate system from the region information, the distance, and the line-of-sight parameters as the original tracking model. The calibration object may be the first object or a second object different from the first object.

For example, when a user uses the tracking system for the first time, the user can perform a calibration operation facing the area to be controlled from any position within the working range of the sensor module, thereby constructing and training the original tracking model. First use refers to the first time the tracking system is used. Any position within the working range of the sensor module refers to the range that the eye tracking sensor, image acquisition device, and other components of the sensor module can detect.

With reference to FIG. 1, the tracking system can display positioning coordinates in the four corners of the display area in sequence, for example at the "+"-shaped marks in the four corners of the display area in FIG. 1. The first object can gaze at each positioning coordinate in turn according to the display order; in FIG. 1 and FIG. 3 below, the first object (or calibration object) is shown as an icon with a circle on top and a rectangle below. When the first object gazes at any positioning coordinate from a fixed position within the working range of the sensor module, the sensor module in the tracking system can acquire the region information of the area to be controlled, the region information at least including the size and/or shape of the area; it can also obtain the distance between the calibration object and the area to be controlled, as well as the line-of-sight parameters of the calibration object while gazing at multiple designated positions within the area, the line-of-sight parameters including a line-of-sight angle and a line-of-sight height.

Specifically, the area information of the area to be controlled can be acquired through the image acquisition device in the sensor module. The area information may include the relative positional relationships among the multiple designated positions in the area to be controlled, and/or their sizes and/or shapes. In FIG. 1 the area to be controlled is shown as a rectangle and coincides with the display area in both position and size. In practical applications, the area to be controlled may be a rectangle, in which case its size may include the length and width of the rectangle; it may also be a circle, in which case its size may include the center position and radius of the circle. Likewise, the area to be controlled may be an ellipse, a semicircle, a sector, or a combination of two or more shapes. Furthermore, part or all of the screen of the electronic device may be determined as the area to be controlled for gaze tracking while, for example, only the right three-quarters of a rectangular screen is configured as the display area.

Taking FIG. 1 as a reference, assume the calibration object gazes at positioning coordinate 1 from the fixed position shown in FIG. 1. For the distance between the calibration object and the area to be controlled, the sensor module can collect a first distance between the calibration object at that fixed position and the sensor module, and a second distance between the sensor module and the display area. If the sensor module and the calibration object are on the same side of the plane in which the display area lies, the distance between the calibration object and the area to be controlled is the sum of the first distance and the second distance; if they are on different sides, the distance is the first distance minus the second distance. The sight line parameters of the calibration object gazing at positioning coordinate 1 may include a sight line angle and a sight line height. The sight line angle, also called the eyeball offset angle, can be obtained directly by the eyeball sensor in the sensor module. The sight line height is the vertical distance from the eyeball to the ground when the calibration object gazes at positioning coordinate 1 from the fixed position; it can likewise be obtained directly by the sensor module.
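As a minimal sketch, the side-dependent distance rule described above can be written as follows (the function and parameter names are illustrative assumptions, not taken from the patent):

```python
def object_to_area_distance(d_sensor_obj: float, d_sensor_area: float, same_side: bool) -> float:
    """Distance from the calibration object to the plane of the display area.

    d_sensor_obj:  first distance  -- sensor module to calibration object
    d_sensor_area: second distance -- sensor module to display area
    same_side:     True if the sensor module and the calibration object
                   are on the same side of the display-area plane
    """
    if same_side:
        # object -> sensor -> display plane: distances add up
        return d_sensor_obj + d_sensor_area
    # sensor on the far side of the plane: subtract the second distance
    return d_sensor_obj - d_sensor_area
```

For example, with the sensor 0.5 m in front of the screen and the object 2.0 m from the sensor on the same side, the object is 2.5 m from the screen plane.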

Further, when the calibration object gazes at positioning coordinates 2, 3 and 4 from the fixed position, the corresponding sight line parameters can be obtained in the same way. A three-dimensional spatial coordinate system is then constructed based on the distance between the calibration object and the area to be controlled together with the sight line parameters of the calibration object gazing at the multiple designated positions in the area to be controlled; this coordinate system can serve as the original tracking model.
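The calibration data gathered above can be assembled into a simple structure standing in for the original tracking model; the field names below are assumptions for illustration, not part of the patent:

```python
def build_original_model(area_info, distance, corner_sight_params):
    """Assemble calibration data into a dictionary standing in for the
    original tracking model: area information, object-to-area distance,
    and per-corner sight line parameters (angle, height)."""
    return {
        "area": area_info,                        # e.g. {"shape": "rect", "w": 1.6, "h": 0.9}
        "distance_m": distance,                   # calibration object <-> area to be controlled
        "calibration": dict(corner_sight_params), # corner id -> (sight angle deg, sight height m)
    }
```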

FIG. 3 shows the spatial correspondence between the display area and an object in the tracking method according to an embodiment of the present application; the three-dimensional spatial coordinate system is described here with reference to FIG. 3. The coordinate system records, for any position (x, y, z) at which the eyeball of the calibration object may be located while gazing at the area to be controlled, the mapping between the eyeball angle of the calibration object at that position and the gazed position within the area to be controlled. Therefore, provided the relative position of the sensor module and the area to be controlled does not change, as soon as the three-dimensional coordinate position (x1, y1, z1) of the eyeball of the first object and the current eyeball angle of the first object are obtained, the gaze position of the first object in the area to be controlled can be determined from the mapping, stored in the coordinate system representing the original tracking model, between the eyeball angle at position (x1, y1, z1) and the gazed position in the area to be controlled.

It should be noted that the sight line parameters of the first object may further include the fixation time during which the first object keeps gazing at the current position. The duration of the fixation time can be set according to actual needs, for example 1 s, 3 s or 3.5 s. Setting a fixation time ensures the stability of the tracking method, prevents a brief pause while the first object scans the area to be controlled from being registered as a gaze position, and effectively reduces the computational load of the tracking method.
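A dwell-time filter of this kind can be sketched as follows; the function name, the tolerance, and the sample format are illustrative assumptions, not taken from the patent:

```python
def confirm_fixation(samples, min_dwell=1.0, tol=0.05):
    """Return the fixated point if the gaze stayed within `tol`
    (normalized screen units) for at least `min_dwell` seconds,
    otherwise None.

    samples: list of (timestamp_s, x, y) gaze samples in time order.
    """
    if not samples:
        return None
    t0, x0, y0 = samples[0]
    for t, x, y in samples:
        if abs(x - x0) > tol or abs(y - y0) > tol:
            # gaze moved away: restart the dwell window at this sample
            t0, x0, y0 = t, x, y
    t_last = samples[-1][0]
    return (x0, y0) if t_last - t0 >= min_dwell else None
```

A brief pause during a scan (shorter than `min_dwell`) yields `None`, so it is not registered as a gaze position.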

In this embodiment of the present application, the process of inputting the first position information and the sight line parameters into the target tracking model may begin by obtaining the change information of the sensor module and the display area. If the change information meets the adjustment condition, the position information and the sight line parameters are input into the first tracking model. If the change information does not meet the adjustment condition, the position information and the sight line parameters are input into the original tracking model, which is a model trained on the first object or on a second object different from the first object.

In this embodiment of the present application, if the variation of at least one of the display area, the sensor module, or the relative positional relationship between the sensor module and the display area is not less than a first set value, it is determined that the change information meets the adjustment condition; and/or, if none of the display area, the sensor module, and the relative positional relationship between the sensor module and the display area has changed, or the variation of each of them is smaller than a second set value, it is determined that the change information does not meet the adjustment condition.

It should be noted that the first set value of the variation here may be 0, that is, any change at all is judged to mean that the change information meets the adjustment condition.

Specifically, when the display area changes and the magnitude of the change exceeds the corresponding set threshold, the change information may be judged to meet the adjustment condition, for example when the size or shape of the display area changes.

The change information may likewise be judged to meet the adjustment condition when the relative position between the sensor module and the display area changes and the magnitude of the change exceeds the corresponding set threshold, for example when the angle between the display area and the sensor module, their relative position, or the perpendicular distance from the sensor module to the display area changes.

The change information may also be judged to meet the adjustment condition when the sensor module itself changes, for example when the sensor module is replaced with a different type, its firmware is upgraded, or its configuration is upgraded.

Conversely, if the positions of the display area and the sensor module, the firmware, and all other configurations remain unchanged, the adjustment condition is not met.

In this embodiment of the present application, it can also be stipulated that when the relative position between the display area and the sensor module has not changed, or has changed only within a preset range, the change information is judged not to meet the adjustment condition.
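The threshold test described in the preceding paragraphs can be sketched as follows; the dictionary keys and default value are assumptions for illustration:

```python
def meets_adjustment_condition(deltas, first_set_value=0.0):
    """deltas maps each tracked quantity (display area, sensor module,
    their relative positional relationship) to its measured change
    magnitude.  The adjustment condition is met when at least one
    quantity actually changed by an amount not less than the first set
    value; a first set value of 0 means any change at all qualifies."""
    return any(d > 0 and d >= first_set_value for d in deltas.values())
```

With the default first set value of 0, moving the sensor module by any nonzero amount triggers adjustment of the tracking model.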

In this embodiment of the present application, the original tracking model can be processed based on the change information of the sensor module and the display area as follows: the position change information of the first object relative to the area to be controlled and the sight line change parameters of the first object are determined from the change information, and the parameters of the original tracking model are then adjusted based on the position change information and the sight line change parameters to obtain the first tracking model.

For example, with reference to FIG. 3, if the perpendicular distance from the sensor module to the display area and the height of the sensor module above the ground are unchanged, and the position of the sensor module relative to the display area changes only in the X-axis direction, then the Y-axis and Z-axis information can be kept unchanged. If the sensor module moves by x1 in the positive X-axis direction, x1 can be subtracted from the X-axis coordinate of every position represented by three-dimensional coordinates in the original three-dimensional spatial coordinate system. When the position of the sensor module relative to the display area changes along the Y axis or the Z axis, the same principle can be applied: the parameters of the original tracking model are adjusted based on the position change information and the sight line change parameters to obtain the first tracking model.
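Under the assumptions of this example, the X-axis adjustment amounts to a translation of every calibrated point (a sketch with hypothetical names):

```python
def shift_model_x(calibrated_points, x1):
    """After the sensor module moves by x1 along the positive X axis
    (Y and Z unchanged), subtract x1 from the X coordinate of every
    calibrated 3D position in the original coordinate system."""
    return [(x - x1, y, z) for (x, y, z) in calibrated_points]
```

The analogous adjustment for a Y-axis or Z-axis move would translate the corresponding coordinate instead.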

In this embodiment of the present application, the gaze position of the first object in the area to be controlled can be determined as follows: determine the perpendicular distance of the eyeball of the first object from the display area; determine the angle between the current line of sight of the first object and the display area, as well as the current sight line height of the first object; and, from the angle, the sight line height and the perpendicular distance, determine the intersection point of the line of sight with the display area as the gaze position.

For example, referring again to FIG. 3, the Y-axis information of the eyeball of the first object in the three-dimensional spatial coordinate system can be determined from the perpendicular distance of the eyeball from the display area. Moreover, in the course of determining that perpendicular distance, the perpendicular line connecting the eyeball of the first object to the display area can be determined, from which the X-axis information of the eyeball in three-dimensional space can further be derived. The Z-axis information of the eyeball in the coordinate system can be determined from the current sight line height of the first object, and the angle between the current line of sight of the first object and the display area can be determined from the data of the eyeball offset angle sensor in the sensor module. In this way, determining the gaze position of the first object in the area to be controlled reduces to a purely trigonometric problem.
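A minimal sketch of that trigonometric step, assuming the screen lies in the plane y = 0 and that the sensor reports separate horizontal and vertical gaze angles relative to the screen normal (both are assumptions for illustration, not statements from the patent):

```python
import math

def gaze_point_on_screen(eye_x, eye_dist, eye_height, yaw_deg, pitch_deg):
    """Intersect the gaze ray with the screen plane y = 0.

    eye_x      -- eyeball X coordinate (parallel to the screen)
    eye_dist   -- perpendicular (Y) distance from eyeball to screen
    eye_height -- sight line height: eyeball height above the ground (Z)
    yaw_deg    -- horizontal gaze angle from the screen normal
    pitch_deg  -- vertical gaze angle from the screen normal
    Returns (x, z): horizontal position and height of the gaze point.
    """
    x = eye_x + eye_dist * math.tan(math.radians(yaw_deg))
    z = eye_height + eye_dist * math.tan(math.radians(pitch_deg))
    return x, z
```

Looking straight at the screen (yaw = pitch = 0) puts the gaze point directly opposite the eyeball; a 45-degree horizontal angle shifts it sideways by exactly the perpendicular distance.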

In this embodiment of the present application, the position information of multiple objects relative to the area to be controlled and the sight line parameters of the multiple objects can also be obtained, and the gaze position of a second object among the multiple objects in the area to be controlled can be determined based on the gaze position of the first object in the area to be controlled, the positional relationship between the first object and the second object, and the sight line parameters of the second object, wherein the multiple objects include the first object.

For example, if multiple objects simultaneously gaze at the same or different positions in the same area to be controlled, the gaze position of each object needs to be determined. In that case, the sight lines of the multiple objects may be tracked sequentially according to the order in which the objects gaze at the area to be controlled and their fixation times, or the sight lines of the multiple objects may be tracked in real time by multiple synchronized threads. In this process, the positional relationships between the objects can be determined from the identifiers of the multiple objects. During sequential sight line tracking, the sight line parameters and gaze position of the first object at the previous moment, the positional relationship between the first object and the second object, and the current gaze parameters of the second object can be used to determine the current gaze position of the second object in the area to be controlled.

In this embodiment of the present application, the first position information of the first object relative to the area to be controlled at the current moment and the first sight line parameters of the first object are also obtained, together with the motion information and sight line change information of the first object. On this basis, the second sight line parameters of the first object at a target moment and its second position information relative to the area to be controlled are predicted from the motion information and the sight line change information, and the gaze position of the first object in the area to be controlled is then determined based on the second sight line parameters and the second position information.

For example, if the sight line parameters of the first object do not change but the first object moves relative to the display area only in the positive X-axis direction, the gaze position of the first object likewise moves in the positive X-axis direction, following the movement of the first object relative to the display area. It should be noted that this is merely an illustrative example and does not limit the tracking method in practical applications.
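A constant-velocity extrapolation is one simple way to realize such a prediction; the linear model and all names here are assumptions for illustration, not the patent's method:

```python
def predict_state(position, velocity, sight_params, sight_rates, dt):
    """Constant-velocity extrapolation of the first object's position
    and sight line parameters to a target moment dt seconds ahead.

    position, velocity       -- 3D tuples (metres, metres/second)
    sight_params, sight_rates -- e.g. (angle_deg, height_m) and their rates
    """
    new_position = tuple(p + v * dt for p, v in zip(position, velocity))
    new_sight = tuple(s + r * dt for s, r in zip(sight_params, sight_rates))
    return new_position, new_sight
```

An object drifting along the positive X axis with unchanged sight parameters yields a predicted gaze position shifted in X only, matching the example above.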

With the tracking method and device of the embodiments of the present application, the position information of the first object relative to the area to be controlled and the sight line parameters of the first object are obtained, and the position information and the sight line parameters are input into the target tracking model to determine the gaze position of the first object in the area to be controlled. The area to be controlled at least includes a display area used by the electronic device to output display content, and the target tracking model includes a first tracking model obtained by processing the original tracking model based on the change information of the sensor module and the display area, the sensor module being used to collect the position information and/or the sight line parameters. A target tracking model can thus be obtained by processing the original tracking model based on the change information of the sensor module and the display area, and this target tracking model can be used to track the gaze position of the first object. This effectively avoids the complex joint debugging of the sensor module and the electronic device that would otherwise be required whenever the position or model specification of the sensor module or of the display area of the electronic device changes. The sensor module can be deployed independently and need not be fixed at a specific position on the screen. As a result, the user's use of the tracking function of the electronic device is no longer limited to positions close to the display area, which effectively broadens the positions and viewing angles from which the tracking function can be used.

Similarly, based on the above tracking method, an embodiment of the present application further provides a computer-readable storage medium storing a program. When the program is executed by a processor, the processor performs at least the following operations. Operation 201: obtain the position information of the first object relative to the area to be controlled and the sight line parameters of the first object, the area to be controlled at least including a display area used by the electronic device to output display content. Operation 202: input the position information and the sight line parameters into the target tracking model to determine the gaze position of the first object in the area to be controlled; wherein the target tracking model includes a first tracking model obtained by processing the original tracking model based on the change information of the sensor module and the display area, the sensor module being used to collect the position information and/or the sight line parameters.

Further, based on the above tracking method, an embodiment of the present application also provides a tracking device. As shown in FIG. 4, the tracking device 40 includes: an acquisition module 401 for obtaining the position information of the first object relative to the area to be controlled and the sight line parameters of the first object, the area to be controlled at least including a display area used by the electronic device to output display content; and a tracking module 402 for inputting the position information and the sight line parameters into the target tracking model to determine the gaze position of the first object in the area to be controlled; wherein the target tracking model includes a first tracking model obtained by processing the original tracking model based on the change information of the sensor module and the display area, the sensor module being used to collect the position information and/or the sight line parameters.

In this embodiment of the present application, the tracking module 402 includes: a change acquisition sub-module for obtaining the change information of the sensor module and the display area; and an input sub-module for inputting the position information and the sight line parameters into the first tracking model if the change information meets the adjustment condition, and/or inputting the position information and the sight line parameters into the original tracking model if the change information does not meet the adjustment condition, the original tracking model being a model trained on the first object or on a second object different from the first object.

In this embodiment of the present application, the input sub-module judges whether the change information meets the adjustment condition as follows: if the variation of at least one of the display area, the sensor module, or the relative positional relationship between the sensor module and the display area is not less than the first set value, it determines that the change information meets the adjustment condition; and/or, if none of the display area, the sensor module, and the relative positional relationship between the sensor module and the display area has changed, or the variation of each of them is smaller than the second set value, it determines that the change information does not meet the adjustment condition.

In this embodiment of the present application, the training process of the original tracking model includes: acquiring the area information of the area to be controlled, the area information at least including the size and/or shape of the area to be controlled; obtaining the distance between the calibration object and the area to be controlled, as well as the sight line parameters of the calibration object gazing at multiple designated positions in the area to be controlled, the sight line parameters including a sight line angle and a sight line height; and constructing a three-dimensional spatial coordinate system based on the area information, the distance and the sight line parameters as the original tracking model; wherein the calibration object is the first object or a second object different from the first object.

In this embodiment of the present application, the tracking module 402 includes: a distance determination sub-module for determining the perpendicular distance of the eyeball of the first object from the display area; a height determination sub-module for determining the angle between the current line of sight of the first object and the display area as well as the current sight line height of the first object; and a gaze position determination sub-module for determining, from the angle, the sight line height and the perpendicular distance, the intersection point of the line of sight with the display area as the gaze position.

In this embodiment of the present application, processing the original tracking model based on the change information of the sensor module and the display area includes: determining, based on the change information, the position change information of the first object relative to the area to be controlled and the sight line change parameters of the first object; and adjusting the parameters of the original tracking model based on the position change information and the sight line change parameters to obtain the first tracking model.

In this embodiment of the present application, the sensor module includes a first module and a second module. Correspondingly, the acquisition module 401 includes a first acquisition sub-module for performing the following if the first object goes beyond the acquisition range of the first module: outputting prompt information; and/or controlling the second module to obtain the position information and the sight line parameters.

In this embodiment of the present application, the tracking device 40 further includes: a multi-object acquisition module for obtaining the position information of multiple objects relative to the area to be controlled and the sight line parameters of the multiple objects, the multiple objects including the first object; and a relative tracking module for determining the gaze position of a second object among the multiple objects in the area to be controlled based on the gaze position of the first object in the area to be controlled, the positional relationship between the first object and the second object, and the sight line parameters of the second object.

In this embodiment of the present application, the tracking device 40 further includes: a real-time acquisition module for obtaining the first position information of the first object relative to the area to be controlled at the current moment and the first sight line parameters of the first object; a motion acquisition module for obtaining the motion information and sight line change information of the first object; a position determination module for predicting, based on the motion information and the sight line change information, the second sight line parameters of the first object at the target moment and its second position information relative to the area to be controlled; and a motion tracking module for determining the gaze position of the first object in the area to be controlled based on the second sight line parameters and the second position information.

It should be pointed out here that the above description of the tracking device embodiment is similar to the description of the method embodiments shown in FIGS. 1 to 3 above and offers similar beneficial effects, so it is not repeated. For technical details not disclosed in the tracking device embodiment of the present application, please refer to the description of the method embodiments shown in FIGS. 1 to 3 above; they are omitted here to save space.

It should be noted that, herein, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element qualified by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article or device that includes the element.

In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the couplings, direct couplings or communication connections between the components shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or in other forms.

The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.

Those of ordinary skill in the art will understand that all or part of the steps implementing the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk or an optical disk.

Alternatively, if the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk or an optical disk.

The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art could easily conceive of changes or replacements within the technical scope disclosed in the present application, and these should all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of tracking, comprising:
obtaining position information of a first object relative to an area to be controlled and sight line parameters of the first object, wherein the area to be controlled at least comprises a display area used by an electronic device to output display content;
inputting the position information and the sight line parameters into a target tracking model to determine a fixation position of the first object in the area to be controlled;
the target tracking model comprises a first tracking model obtained by processing an original tracking model based on change information of a sensor module and the display area, and the sensor module is used for acquiring the position information and/or the sight line parameters.
2. The method of claim 1, wherein inputting the position information and the sight line parameters into a target tracking model comprises:
obtaining the change information of the sensor module and the display area;
if the change information meets the adjustment condition, inputting the position information and the sight line parameters into the first tracking model; and/or
if the change information does not meet the adjustment condition, inputting the position information and the sight line parameters into the original tracking model, the original tracking model being a model trained on the first object or on a second object different from the first object.
3. The method of claim 2, wherein:
if the variation of at least one of the display area, the sensor module, or the relative positional relationship between the sensor module and the display area is not less than a first set value, determining that the change information meets the adjustment condition; and/or
if none of the display area, the sensor module, and the relative positional relationship between the sensor module and the display area has changed, or the variation of each is smaller than a second set value, determining that the change information does not meet the adjustment condition.
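As an illustration only (the patent gives no code), the two branches of claim 3 can be sketched in Python. The dictionary keys, parameter names, and the `None` fallback for the case neither branch covers are assumptions, not part of the claim:

```python
def meets_adjustment_condition(variations, first_set_value, second_set_value):
    # variations: change magnitudes for the display area, the sensor module,
    # and their relative position (keys are illustrative, not from the patent).
    if any(v >= first_set_value for v in variations.values()):
        return True   # at least one change reached the first set value
    if all(v < second_set_value for v in variations.values()):
        return False  # everything (nearly) unchanged: keep the original model
    return None       # in-between case; claim 3 leaves this unspecified
```

With two set values, the middle band between them acts as a hysteresis zone, which would avoid toggling between models on borderline changes.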
4. The method according to claim 1 or 2, wherein the training process of the original tracking model comprises:
acquiring area information of the area to be controlled, wherein the area information at least comprises the size and/or the shape of the area to be controlled;
obtaining the distance between a calibration object and the area to be controlled and sight line parameters of the calibration object gazing at a plurality of designated positions in the area to be controlled, wherein the sight line parameters comprise sight line angles and sight line heights;
constructing a three-dimensional space coordinate system based on the region information, the distance and the sight line parameters to serve as the original tracking model;
wherein the calibration object is the first object or a second object different from the first object.
5. The method of any one of claims 1 to 3, wherein determining the gaze position of the first object in the area to be controlled comprises:
determining a vertical distance of an eyeball of the first object relative to the display area;
determining an included angle between the current sight line of the first object and the display area, and the current sight line height of the first object;
determining an intersection point of the sight line with the display area as the gaze position according to the included angle, the sight line height, and the vertical distance.
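The geometric step of claim 5 can be sketched as follows. The coordinate convention is an assumption the claim does not fix: a vertical display plane, a downward-pointing sight line, and heights measured along the display's vertical axis. With included angle a between the sight line and the display plane, the line drops `vertical_distance / tan(a)` before reaching the plane:

```python
import math

def gaze_position(vertical_distance, included_angle_deg, sight_height):
    # vertical_distance: perpendicular eye-to-display distance
    # included_angle_deg: angle between the sight line and the display plane
    # sight_height: eye height measured on the display's vertical axis
    drop = vertical_distance / math.tan(math.radians(included_angle_deg))
    return sight_height - drop  # height of the intersection point on the display
```

For example, at 500 mm from the screen with a 45° included angle, the gaze lands 500 mm below eye height; at 90° (looking straight at the screen) it lands exactly at eye height.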
6. The method of claim 1, wherein processing the original tracking model based on the change information of the sensor module and the display area comprises:
determining position change information of the first object relative to the area to be controlled and a sight line change parameter of the first object based on the change information;
and adjusting parameters of the original tracking model based on the position change information and the sight line change parameters to obtain the first tracking model.
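One reading of claim 6 is that the first tracking model compensates the original model's inputs for the measured displacement of the sensor module and the display area. The closure below is an illustrative sketch only; `sensor_shift` and `display_shift` are hypothetical one-dimensional displacements, and the sign convention is assumed:

```python
def build_first_tracking_model(original_model, sensor_shift, display_shift):
    # If the sensor moved by sensor_shift and the display by display_shift
    # (same axis, same units), the object's measured position shifts relative
    # to the display; correct for that before reusing the original model.
    def first_model(measured_position, gaze_angle):
        corrected = measured_position + sensor_shift - display_shift
        return original_model(corrected, gaze_angle)
    return first_model
```

This avoids retraining: the original calibration is kept and only its inputs are re-referenced to the new geometry.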
7. The method of claim 1, wherein
the sensor module comprises a first module and a second module; and correspondingly,
the obtaining of the position information of the first object relative to the area to be controlled and the sight line parameters of the first object comprises:
if the first object is beyond the acquisition range of the first module, executing the following steps:
outputting prompt information; and/or
controlling the second module to obtain the position information and the sight line parameters.
8. The method of claim 1, further comprising:
obtaining position information of a plurality of objects relative to the area to be controlled and sight line parameters of the plurality of objects, wherein the plurality of objects comprise the first object;
determining a gaze position of a second object of the plurality of objects in the area to be controlled based on the gaze position of the first object in the area to be controlled, the positional relationship between the first object and the second object, and a sight line parameter of the second object.
9. The method of claim 1, further comprising:
obtaining first position information of the first object relative to the area to be controlled at the current moment and a first sight line parameter of the first object;
obtaining motion information and sight line change information of the first object;
predicting, based on the motion information and the sight line change information, a second sight line parameter of the first object at a target moment and second position information of the first object relative to the area to be controlled;
determining the gaze position of the first object in the area to be controlled based on the second sight line parameter and the second position information.
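Claim 9 does not specify a prediction model; a minimal sketch using linear extrapolation (all parameter names assumed) is:

```python
def predict_gaze(position, velocity, angle_deg, angular_rate, dt):
    # Extrapolate head position and gaze angle from the current motion and
    # sight-line change rates to the target moment dt seconds ahead.
    predicted_position = tuple(p + v * dt for p, v in zip(position, velocity))
    predicted_angle = angle_deg + angular_rate * dt
    return predicted_position, predicted_angle
```

The predicted position and angle would then be fed to the tracking model in place of the current measurements, compensating for sensor and processing latency.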
10. A tracking device, comprising:
an obtaining module, configured to obtain position information of a first object relative to an area to be controlled and sight line parameters of the first object, wherein the area to be controlled at least comprises a display area used by an electronic device to output display content;
a tracking module, configured to input the position information and the sight line parameters into a target tracking model to determine a gaze position of the first object in the area to be controlled;
the target tracking model comprises a first tracking model obtained by processing an original tracking model based on change information of a sensor module and the display area, and the sensor module is used for acquiring the position information and/or the sight line parameters.
CN202210309192.9A 2022-03-25 2022-03-25 A tracking method and device Pending CN114740973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210309192.9A CN114740973A (en) 2022-03-25 2022-03-25 A tracking method and device

Publications (1)

Publication Number Publication Date
CN114740973A true CN114740973A (en) 2022-07-12

Family

ID=82277589

Country Status (1)

Country Link
CN (1) CN114740973A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109725714A (en) * 2018-11-14 2019-05-07 北京七鑫易维信息技术有限公司 Sight determines method, apparatus, system and wear-type eye movement equipment
CN111580656A (en) * 2020-05-08 2020-08-25 安徽华米信息科技有限公司 Wearable device and control method and device thereof
CN111831119A (en) * 2020-07-10 2020-10-27 Oppo广东移动通信有限公司 Eye tracking method, device, storage medium and head mounted display device
CN111885372A (en) * 2020-06-10 2020-11-03 南京润景丰创信息技术有限公司 Automatic image target calibration system and method compatible with laser and live ammunition shooting
US20210026446A1 (en) * 2019-07-26 2021-01-28 Samsung Electronics Co., Ltd. Method and apparatus with gaze tracking

Similar Documents

Publication Publication Date Title
US20240273934A1 (en) Object tracking assisted with hand or eye tracking
US7783077B2 (en) Eye gaze tracker system and method
CN108881724B (en) Image acquisition method, device, equipment and storage medium
Coutinho et al. Improving head movement tolerance of cross-ratio based eye trackers
US8933912B2 (en) Touch sensitive user interface with three dimensional input sensor
JP5689850B2 (en) Video analysis apparatus, video analysis method, and gaze point display system
CN109690553A (en) The system and method for executing eye gaze tracking
CN108369744B (en) 3D Gaze Detection via Binocular Homography Mapping
WO2015167941A1 (en) Gaze tracking calibration
KR20250049307A (en) Medical image overlays for augmented reality experiences
US9990739B1 (en) Method and device for fisheye camera automatic calibration
Lander et al. hEYEbrid: A hybrid approach for mobile calibration-free gaze estimation
CN117372475A (en) Eyeball tracking method and electronic equipment
CN110338750B (en) Eyeball tracking equipment
CN113253851B (en) Immersive flow field visualization man-machine interaction method based on eye movement tracking
Weigle et al. Analysis of eye-tracking experiments performed on a Tobii T60
WO2025112874A1 (en) Eye movement tracking control method
WO2025112875A1 (en) Eye movement tracking device
CN114385015A (en) Control method of virtual object and electronic equipment
CN114740973A (en) A tracking method and device
CN118714280A (en) Distortion calibration method, device, equipment and storage medium for near-eye display device
CN117590942A (en) Control method, device, equipment and storage medium of electronic equipment
KR102730600B1 (en) Apparatus for display control for eye tracking and method thereof
Covolan et al. Non-deterministic method for semi-automatic calibration of smartphone-based OST HMDs
SE2350088A1 (en) Systems and methods for head pose data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination