WO2023108711A1 - Method and apparatus for synchronously analyzing behavior information and pupil information, and device and medium - Google Patents

Method and apparatus for synchronously analyzing behavior information and pupil information, and device and medium

Info

Publication number
WO2023108711A1
Authority
WO
WIPO (PCT)
Prior art keywords
pupil
video data
behavior
target object
information
Application number
PCT/CN2021/140013
Other languages
French (fr)
Chinese (zh)
Inventor
郭丰
张佳佳
蔚鹏飞
王立平
Original Assignee
中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Application filed by 中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Publication of WO2023108711A1 publication Critical patent/WO2023108711A1/en

Classifications

    • G06F18/22: Pattern recognition / Analysing / Matching criteria, e.g. proximity measures
    • G06F18/231: Pattern recognition / Clustering techniques / Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • G06F18/2431: Pattern recognition / Classification techniques / Multiple classes
    • G06N3/045: Neural networks / Architecture / Combinations of networks
    • G06N3/08: Neural networks / Learning methods


Abstract

Disclosed herein are a method and apparatus for synchronously analyzing behavior information and pupil information, and a device and a medium. The method for synchronously analyzing behavior information and pupil information comprises: acquiring behavior video data and pupil video data of a target object within the same time period, wherein the behavior video data comprises video data acquired from at least four photographing angles; respectively analyzing the behavior video data and the pupil video data, so as to determine behavior information and pupil information of the target object; and establishing an association relationship between the behavior information and the pupil information according to the collection time of the behavior video data and the pupil video data.

Description

Method, Apparatus, Device and Medium for Synchronous Analysis of Behavior and Pupil Information

This application claims priority to Chinese patent application No. 202111530272.9, filed with the China Patent Office on December 14, 2021, the entire contents of which are incorporated herein by reference.

Technical Field

The present application relates to the technical field of machine vision, and in particular to a method, apparatus, device and medium for the synchronous analysis of behavior and pupil information.

Background

Pupil information is generally associated with stimulation of the brain's nervous system, indicating that a human or animal body has produced certain specific behaviors or that a specific central circuit has been activated. For example, when we are frightened, the central nervous system receives the fear signal and generates feedback; this makes us tremble with fear and triggers the urge to flee, while at the same time the pupils dilate to roughly three times their usual size.

Methods of recording brain neural activity, including electrophysiology and optogenetics, can establish the association between movement and neural activity. The relationship between pupil information and behavior can therefore serve as a bridge for exploring the link between pupil information and brain neural activity.

However, there is as yet no mature scheme for synchronously analyzing behavior information and pupil information.
Summary

The present application provides a method, apparatus, device and medium for the synchronous analysis of behavior and pupil information, so as to synchronously acquire and analyze the behavior information and pupil information of a target object and match the two, providing research data for studies of brain science, neural mechanisms and circuits.

An embodiment of the present application provides a method for the synchronous analysis of behavior and pupil information, the method comprising:

acquiring behavior video data and pupil video data of a target object within the same time period, wherein the behavior video data comprises video data acquired from at least four shooting angles;

analyzing the behavior video data and the pupil video data respectively to determine behavior information and pupil information of the target object; and

establishing an association relationship between the behavior information and the pupil information according to the acquisition times of the behavior video data and the pupil video data.

The present application further provides an apparatus for the synchronous analysis of behavior and pupil information, the apparatus comprising:

a data acquisition module, configured to acquire behavior video data and pupil video data of a target object within the same time period, wherein the behavior video data comprises video data acquired from at least four shooting angles;

a data analysis module, configured to analyze the behavior video data and the pupil video data respectively and determine behavior information and pupil information of the target object; and

a data matching module, configured to establish an association relationship between the behavior information and the pupil information according to the acquisition times of the behavior video data and the pupil video data.

The present application further provides a computer device, comprising:

one or more processors; and

a memory, configured to store one or more programs,

wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the above method for the synchronous analysis of behavior and pupil information.

The present application further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the above method for the synchronous analysis of behavior and pupil information.
Brief Description of the Drawings

Fig. 1 is a flowchart of a method for the synchronous analysis of behavior and pupil information provided in Embodiment 1 of the present application;

Fig. 2 is a schematic diagram of skeleton key points of a target object provided in Embodiment 1 of the present application;

Fig. 3 is a schematic diagram of pupil key points of a target object provided in Embodiment 1 of the present application;

Fig. 4 is a schematic diagram of eyeball-periphery key points of a target object provided in Embodiment 1 of the present application;

Fig. 5 is a schematic structural diagram of an apparatus for the synchronous analysis of behavior and pupil information provided in Embodiment 2 of the present application;

Fig. 6 is a schematic structural diagram of a computer device provided in Embodiment 3 of the present application.
Detailed Description

The present application is described below with reference to the drawings and embodiments. The specific embodiments described herein are intended only to explain the application. For ease of description, only the parts relevant to the application are shown in the drawings.

Embodiment 1

Fig. 1 is a flowchart of a method for the synchronous analysis of behavior and pupil information provided in Embodiment 1 of the present application. This embodiment is applicable to the study of behavior and brain neural activity, for example to scenarios in which behavior information and pupil information are analyzed synchronously with mice as the research subjects. The method may be executed by an apparatus for the synchronous analysis of behavior and pupil information, which may be implemented in software and/or hardware and integrated into a computer device with application-development capability.
As shown in Fig. 1, the method for the synchronous analysis of behavior and pupil information comprises the following steps.

S110. Acquire behavior video data and pupil video data of a target object within the same time period, wherein the behavior video data comprises video data acquired from at least four shooting angles.
The target object is the subject of the experimental study; in brain-science research the subjects are typically laboratory animals such as mice, monkeys or dogs. When collecting the behavior video data of the target object, four or more cameras are used to record video from different azimuth angles so that the target object's behavior can be observed from different viewpoints; during recording, the target object moves freely and without restriction. The reason behavior video data are collected from multiple angles is that recognizing the target object's behavior requires a three-dimensional reconstruction of the target object, which can be computed from two-dimensional data taken from multiple angles. The multiple cameras that simultaneously record the behavior video data are calibrated for intrinsic and extrinsic parameters. The calibration procedure may consist of photographing a preset checkerboard with the multiple cameras and performing the calibration on the basis of the photographs. For example, with a 12x9 checkerboard as the calibration board, multiple cameras in fixed shooting positions photograph the board at different angles, 60 images per camera; the StereoCameraCalibrator GUI toolbox in MATLAB is then used to perform the calibration, and the resulting intrinsic and extrinsic parameters of the cameras are saved for use in the subsequent recognition of the target object's behavior.
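The role of the saved parameters can be illustrated with the pinhole projection x = K[R|t]X, which maps a 3-D world point to pixel coordinates in one calibrated view; triangulation later inverts this mapping. A minimal numpy sketch with illustrative (not calibrated) parameter values:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3-D points X (N, 3) to pixel coordinates using the saved
    intrinsics K, rotation R and translation t of one calibrated camera."""
    Xc = X @ R.T + t          # world -> camera coordinates
    uvw = Xc @ K.T            # camera -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]

# Illustrative calibration result: focal length 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                  # camera axes aligned with the world axes
t = np.array([0.0, 0.0, 2.0])  # world origin lies 2 units along the optical axis

pts = project(K, R, t, np.array([[0.0, 0.0, 0.0]]))
# the world origin projects onto the principal point
```

With R = I, a point on the optical axis lands on the principal point regardless of its depth, which is a quick sanity check on a calibration result.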
The pupil video data are recorded synchronously by an infrared camera. A camera mount may be worn on the target object's head, and the infrared camera fixed to this mount so that, from its fixed position, its field of view is aimed at the target object's eye and can capture the changes of the pupil. One or two infrared cameras may be provided. In some embodiments, if the target object is small and the load its head can bear is limited, a single infrared camera may be used to record the pupil changes of one eye, so that the head-mounted camera is not so heavy as to interfere with the target object's movement. For larger target objects, two infrared cameras may be used to record the pupil changes of both eyes simultaneously.
S120. Analyze the behavior video data and the pupil video data respectively, and determine the behavior information and pupil information of the target object.
When analyzing the behavior video data, a preset pose-estimation model is first applied to determine the preset skeleton key points of the target object in each video frame of the behavior video data. The preset pose-estimation model in this step may be a pre-trained DeepLabCut deep-learning model; a suitable DeepLabCut model can be matched to each kind of target object. Feeding the video frames of the behavior video data into the pre-trained DeepLabCut model extracts the skeleton key points corresponding to the target object. For example, when the target object is a mouse, the skeleton key points may be multiple points that represent its posture on the limbs, nose, ears, head, torso, tail and so on, as shown by the small square markers of different gray values in Fig. 2. From the two-dimensional pose estimates, i.e. the multiple skeleton key points, extracted from the video frames taken at the same time point from different angles, the target object can be reconstructed in three dimensions by a triangulation algorithm.
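Given the 3x4 projection matrices P = K[R|t] of the calibrated cameras, the 2-D detections of one skeleton key point in the different views can be lifted to 3-D by linear (DLT) triangulation. The patent does not specify which triangulation variant is used; a minimal sketch with two synthetic views:

```python
import numpy as np

def triangulate(Ps, uvs):
    """Linear (DLT) triangulation of one 3-D keypoint.
    Ps:  list of 3x4 projection matrices, one per view
    uvs: list of (u, v) pixel detections of the same keypoint"""
    rows = []
    for P, (u, v) in zip(Ps, uvs):
        rows.append(u * P[2] - P[0])   # each view contributes two
        rows.append(v * P[2] - P[1])   # linear constraints on X
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]                         # null-space solution (homogeneous)
    return X[:3] / X[3]

# Two synthetic views observing the point (1, 2, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted 1 unit in x
X_true = np.array([1.0, 2.0, 5.0])

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_hat = triangulate([P1, P2], [proj(P1, X_true), proj(P2, X_true)])
```

With noise-free detections the SVD null-space recovers the point exactly; with real detections the same least-squares solution minimizes the algebraic reprojection error across all views.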
The three-dimensional reconstruction results of the target object may then be input into a preset behavior-analysis model, which classifies the target object's behavior and determines its behavior information. The preset behavior-classification model may be a pre-trained Behavior Atlas behavior-analysis model, which performs automated, unsupervised three-dimensional behavior analysis of animals and determines the behavior category of the target object. In the analysis performed by the Behavior Atlas model, the time series of the input three-dimensional reconstruction results is segmented, the segmented sequences are reduced in dimension by Uniform Manifold Approximation and Projection (UMAP), and hierarchical clustering then divides the behavior into multiple behavior categories.
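The segment, embed and cluster pipeline can be sketched as follows. A plain SVD projection stands in here for the UMAP embedding (UMAP itself requires the external umap-learn package), and SciPy's Ward-linkage hierarchical clustering splits the embedded windows into behavior classes; the window length and class count are illustrative choices, not values from the patent:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_behaviors(pose_seq, win=10, n_classes=2):
    """Segment a pose time series into fixed-length windows, embed each
    window in 2-D with an SVD projection (stand-in for the UMAP step),
    and split the windows into behavior classes by hierarchical clustering."""
    n = len(pose_seq) // win
    segs = pose_seq[: n * win].reshape(n, -1)      # one row per window
    segs = segs - segs.mean(axis=0)
    _, _, Vt = np.linalg.svd(segs, full_matrices=False)
    emb = segs @ Vt[:2].T                          # 2-D embedding
    Z = linkage(emb, method="ward")                # agglomerative dendrogram
    return fcluster(Z, t=n_classes, criterion="maxclust")

# Synthetic 1-D "keypoint" trace: 5 low-amplitude windows, then 5 high-amplitude
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(5.0, 0.1, 50)])
labels = cluster_behaviors(trace, win=10, n_classes=2)
# windows 0-4 share one label, windows 5-9 share the other
```

Cutting the dendrogram with `criterion="maxclust"` mirrors the step in which the hierarchy is divided into a chosen number of behavior categories.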
When analyzing the pupil video data, a preset marker-point extraction model is first used to extract, from each video frame of the pupil video data, the preset pupil-periphery marker points (the eight pupil-periphery marker points shown as dots of different gray values in Fig. 3), the pupil center point, and the preset eyeball-periphery marker points (shown as dots of different gray values in Fig. 4). The preset marker-point extraction model may likewise be a pre-trained DeepLabCut deep-learning model: the model is trained on pupil images of the target object in which the pupil-periphery and eyeball-periphery key points have been annotated in advance, and the trained model is then used to identify the pupil and eyeball key points in the pupil video data. The pupil diameter of the target object is determined from the preset pupil-periphery marker points and the pupil center point. The eyeball center position of the target object is determined from the preset eyeball-periphery marker points, and the relative position of the pupil and the eyeball is then determined from the eyeball center position and the pupil center point. The pupil diameter and the relative position of the pupil and the eyeball together constitute the pupil information of the target object.

In this step, ellipse fitting may be used to determine the center position of the eyeball and the lengths of the two symmetry axes of the fitted ellipse. The center of the eyeball remains fixed, while the center of the pupil moves with the target object's behavior. The change in the relative position of the pupil center and the eyeball center can therefore be analyzed to determine how the pupil varies over the course of the target object's behavioral responses.
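The geometric step can be sketched as follows: fit a conic to the periphery marker points by least squares, take the conic's center, and estimate the pupil diameter as twice the mean distance from the periphery points to that center. The patent specifies only "ellipse fitting", so this particular fitting procedure and diameter definition are illustrative assumptions:

```python
import numpy as np

def fit_center(pts):
    """Least-squares conic fit A x^2 + B xy + C y^2 + D x + E y + F = 0,
    returning the conic's center (used as the pupil/eyeball center)."""
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(M)
    A, B, C, D, E, _ = Vt[-1]          # null-space = best-fit conic coefficients
    # the center is where the conic's gradient vanishes
    return np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])

def pupil_diameter(pts, center):
    """Diameter estimate: twice the mean periphery-to-center distance."""
    return 2.0 * np.mean(np.linalg.norm(pts - center, axis=1))

# Eight synthetic periphery points on a circle of radius 3 centered at (5, 4)
th = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
pts = np.column_stack([5 + 3 * np.cos(th), 4 + 3 * np.sin(th)])
c = fit_center(pts)
d = pupil_diameter(pts, c)
```

The same `fit_center` routine applies to the eyeball-periphery points; the two symmetry-axis lengths can additionally be read off the fitted conic coefficients when needed for the batch correction described below.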
Before the relative position of the pupil and the eyeball is determined, the pupil center position in each video frame of the pupil video data also needs to be filtered. This is because the target object does not keep its eyes open at all times and will sometimes close them. In those pupil video frames where the eye is closed, no accurate pupil center position can be extracted and the reported position is an outlier. A median filter can therefore be applied to remove these abnormal values.
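The blink-removal step can be sketched as a sliding-window median filter over the per-frame pupil-center trace; the 3-frame window below is an illustrative choice, not a value from the patent:

```python
import numpy as np

def median_filter(x, k=3):
    """Sliding-window median (odd window k) that suppresses the isolated
    outliers produced when the eye is closed and no pupil center is found."""
    r = k // 2
    padded = np.pad(x, r, mode="edge")                       # repeat edge values
    windows = np.lib.stride_tricks.sliding_window_view(padded, k)
    return np.median(windows, axis=1)

# A blink frame leaves one wild pupil-center value in an otherwise smooth trace
trace = np.array([2.0, 2.1, 2.0, 99.0, 2.2, 2.1, 2.0])
smooth = median_filter(trace, k=3)
```

A median is preferred to a moving average here because a single blink frame shifts a mean noticeably but leaves the window's median untouched.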
In addition, the pupil video data of a target object may be collected in several batches, and the fixed relative position of the infrared camera and the target object may differ between batches. The eyeball obtained by ellipse fitting for the same target object will then differ between batches and must be corrected. For the pupil video data collected in different batches, the preset eyeball-periphery marker points are input into the preset ellipse-fitting algorithm to determine the eyeball center position of the target object, and the symmetry axes of the ellipse corresponding to the determined eyeball center are then corrected. For example, the group of data whose fitted symmetry-axis value is largest is taken as the reference group, and the group or groups with smaller symmetry-axis values are corrected by multiplying by a scale factor, so that the data collected in different batches are mutually consistent.
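The batch correction amounts to rescaling each batch's fitted eyeball axis to a common reference. A minimal sketch, assuming (as the example in the text suggests) that the batch with the largest fitted axis serves as the reference:

```python
import numpy as np

def align_batches(axis_lengths):
    """Return one multiplicative correction factor per batch, scaling each
    batch's fitted eyeball axis to the largest (reference) batch."""
    ref = max(axis_lengths)
    return np.array([ref / a for a in axis_lengths])

# Two recording sessions where the camera sat at different distances
batch_axes = [4.0, 5.0]                      # fitted major-axis length per batch
factors = align_batches(batch_axes)
corrected = np.array(batch_axes) * factors   # both batches now match the reference
```

The same factor would be applied to that batch's pupil diameters and center coordinates so that measurements from different sessions share one scale.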
S130. Establish an association relationship between the behavior information and the pupil information according to the acquisition times of the behavior video data and the pupil video data.
When matching the behavior information with the pupil information, it is first checked whether the pupil video data and the behavior video data agree in total frame count, video duration and sampling rate, i.e. whether the two can be aligned on the time series of video frames. For video data with dropped frames, linear interpolation may be used to insert the missing frames: for example, working second by second, a frame is linearly interpolated at the position where the time interval between adjacent frames within that second is largest. Alternatively, for the stream without dropped frames, the corresponding number of frames may be deleted at the positions where the inter-frame interval is smallest. Where the sampling rates differ, the interp1 function may be used to interpolate frames uniformly.
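Where the two streams have different sampling rates, the uniform interpolation the patent performs with MATLAB's interp1 can be sketched with its numpy analogue, np.interp; the sampling rates and diameter values below are illustrative:

```python
import numpy as np

def resample(t_src, x_src, t_dst):
    """Linearly interpolate a signal onto a new time base (the numpy
    analogue of the MATLAB interp1 call mentioned in the text)."""
    return np.interp(t_dst, t_src, x_src)

# Pupil stream at 5 Hz, behavior stream at 10 Hz, over the same 1-second span
t_pupil = np.linspace(0.0, 1.0, 6)           # 0.0, 0.2, ..., 1.0
diam = np.array([3.0, 3.1, 3.4, 3.2, 3.0, 2.9])
t_beh = np.linspace(0.0, 1.0, 11)            # 0.0, 0.1, ..., 1.0
diam_10hz = resample(t_pupil, diam, t_beh)   # pupil trace on the behavior clock
```

After this step both streams have one sample per behavior frame, so frame index doubles as the shared time base for the matching below.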
After frame compensation, the behavior information is matched with the pupil information to establish the association between the two. Based on the frame-compensated video data, an association between the behavior information and the pupil information within a preset time window can be established. The preset time window is a specified duration, for example the duration over which a behavior is exhibited, and may be determined from the time points at which an external stimulus is applied to the target object. While the behavior video data and pupil video data are being collected, the target object may be subjected to external stimuli such as light or threat stimuli, in order to observe its stress-response behavior and the changes in pupil state that accompany the stress response. When an external stimulus is applied, the start and end times of the stimulus are recorded; the preset time window may then be the period from the onset of the external stimulus to its end. Matching the behavior information and pupil information within the same time period completes the data-analysis procedure.
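Once the streams share one aligned time base, matching within a stimulation window reduces to selecting the frames that fall between the recorded stimulus start and end. A minimal sketch with hypothetical frame indices, behavior labels and pupil diameters:

```python
import numpy as np

def window_pair(t, behavior_labels, pupil_diam, t_start, t_end):
    """Pair the behavior labels and pupil measurements that fall inside one
    stimulation window [t_start, t_end] on the shared, aligned time base."""
    mask = (t >= t_start) & (t <= t_end)
    return behavior_labels[mask], pupil_diam[mask]

frames = np.arange(11)                                 # aligned frame indices
labels = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0])   # hypothetical behavior classes
diam = np.linspace(3.0, 4.0, 11)                       # hypothetical pupil diameters
beh_win, pupil_win = window_pair(frames, labels, diam, 3, 6)
# frames 3..6 (the stimulus window) are matched
```

Each recorded stimulus yields one such paired window, i.e. one matched sample of behavior category and pupil state for downstream analysis.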
In the technical solution of this embodiment, behavior video data of the target object from multiple angles, together with pupil video data over the same time period, are acquired while the target object moves freely; the behavior video data and pupil video data are analyzed separately to determine the target object's behavior information and pupil information; and an association between the behavior information and the pupil information is finally established according to the acquisition times of the two video streams. The technical solution of this embodiment fills the gap in the related art regarding the synchronous analysis of the behavior information and pupil information of laboratory mice: it enables synchronous capture of the behavior video and pupil video of a target object and analysis of the corresponding behavior information and pupil information, so that the two can be matched, providing research data for studies of brain science, neural mechanisms and circuits.
Embodiment 2

Fig. 5 is a schematic structural diagram of the apparatus for the synchronous analysis of behavior and pupil information provided in Embodiment 2 of the present application. This embodiment follows the same concept as the synchronous analysis method of the embodiment above and is likewise applicable to the study of behavior and brain neural activity, for example to scenarios in which behavior information and pupil information are analyzed synchronously with mice as the research subjects. The apparatus may be implemented in software and/or hardware and integrated into a server device with application-development capability.

As shown in Fig. 5, the apparatus for the synchronous analysis of behavior and pupil information comprises a data acquisition module 210, a data analysis module 220 and a data matching module 230.

The data acquisition module 210 is configured to acquire behavior video data and pupil video data of a target object within the same time period, wherein the behavior video data comprises video data acquired from at least four shooting angles. The data analysis module 220 is configured to analyze the behavior video data and the pupil video data respectively and determine the behavior information and pupil information of the target object. The data matching module 230 is configured to establish an association relationship between the behavior information and the pupil information according to the acquisition times of the behavior video data and the pupil video data.

In the technical solution of this embodiment, behavior video data of the target object from multiple angles and pupil video data over the same time period are acquired and analyzed separately to determine the target object's behavior information and pupil information, and an association between the two is then established according to the acquisition times of the video data. This fills the gap in the related art regarding the synchronous analysis of the behavior information and pupil information of laboratory mice, enables synchronous capture and analysis of the behavior video and pupil video of a target object, and, by matching the behavior information with the pupil information, provides research data for studies of brain science, neural mechanisms and circuits.
在一种可选的实施方式中,数据分析模块220包括行为信息分析子模块,设置为:In an optional implementation manner, the data analysis module 220 includes a behavior information analysis submodule, which is configured to:
采用预设姿态估计模型对所述行为视频数据进行分析,确定所述行为视频数据中每一帧视频图像中所述目标对象的预设骨架关键点;根据所述行为视频数据中相同时间点的不同角度的视频图像中的预设骨架关键点,对所述目标对象进行三维重建;将所述目标对象的三维重建结果,输入到预设行为分析模型,对所述目标对象的行为进行分类,确定所述目标对象的行为信息。Using a preset attitude estimation model to analyze the behavior video data, determine the preset skeleton key points of the target object in each frame of video image in the behavior video data; according to the same time point in the behavior video data performing 3D reconstruction on the target object based on preset skeleton key points in video images from different angles; inputting the 3D reconstruction result of the target object into a preset behavior analysis model to classify the behavior of the target object, Determine the behavior information of the target object.
在一种可选的实施方式中,数据分析模块220包括瞳孔信息分析子模块,设置为:In an optional implementation manner, the data analysis module 220 includes a pupil information analysis submodule, which is configured to:
基于预设标记点提取模型,提取所述瞳孔视频数据中每一帧视频图像中的预设瞳孔外周标记点、瞳孔中心点和预设眼球外周标记点;根据所述预设瞳孔外周标记点和所述瞳孔中心点确定所述目标对象的瞳孔直径;基于所述预设眼球外周标记点确定所述目标对象的眼球中心位置,并基于所述眼球中心位置与所述瞳孔中心点的位置,确定所述目标对象的瞳孔与眼球的相对位置;将所述目标对象的瞳孔直径以及所述目标对象的瞳孔与眼球的相对位置作为所述目标对象的瞳孔信息。Based on the preset marker point extraction model, extract the preset pupil peripheral marker points, pupil center points and preset eyeball peripheral marker points in each frame of video image in the pupil video data; according to the preset pupil peripheral marker points and The pupil center point determines the pupil diameter of the target object; determines the eyeball center position of the target object based on the preset eyeball peripheral marker points, and determines the eyeball center position based on the eyeball center position and the pupil center point position The relative position of the pupil of the target object and the eyeball; the pupil diameter of the target object and the relative position of the pupil of the target object and the eyeball are used as the pupil information of the target object.
In an optional implementation, the pupil information analysis submodule is further configured to:
before determining the relative position of the pupil with respect to the eyeball, filter the pupil center point positions corresponding to each frame of the pupil video data.
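The application does not specify the filter; a sliding median filter is one plausible choice, since it suppresses blink artifacts and single-frame detection outliers. The window size below is an assumption:

```python
from statistics import median

def filter_centers(centers, window=5):
    """Median-filter a per-frame sequence of pupil center points (x, y)
    before computing the pupil-eyeball relative position.
    The window size is an illustrative assumption."""
    half = window // 2
    out = []
    for i in range(len(centers)):
        # Clamp the window at the sequence boundaries.
        lo, hi = max(0, i - half), min(len(centers), i + half + 1)
        xs = [c[0] for c in centers[lo:hi]]
        ys = [c[1] for c in centers[lo:hi]]
        out.append((median(xs), median(ys)))
    return out
```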
In an optional implementation, the pupil information analysis submodule is further configured to:
for pupil video data of the target object collected in different data collection batches, input the preset eyeball-periphery marker points into a preset ellipse fitting algorithm to determine the eyeball center position of the target object; and
correct the ellipsoid symmetry axis corresponding to the eyeball center position determined by the ellipse fitting algorithm.
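The specific fitting algorithm is not disclosed. As a simpler stand-in for illustration only, an algebraic (Kasa) least-squares circle fit recovers a center from peripheral marker points; an actual ellipse fit would add axis and orientation parameters:

```python
import numpy as np

def fit_circle_center(points):
    """Algebraic (Kasa) least-squares circle fit, used here as a simplified
    stand-in for the 'preset ellipse fitting algorithm'.

    Solves x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F);
    the fitted center is (-D/2, -E/2).
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = -(pts[:, 0] ** 2 + pts[:, 1] ** 2)
    (D, E, _), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (-D / 2, -E / 2)
```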
In an optional implementation, the data matching module 230 is configured to:
compare whether the behavior video data and the pupil video data have the same number of image frames, duration, and video sampling rate, and determine a comparison result; perform a frame supplement operation on the video data with the relatively smaller number of image frames according to the comparison result; and establish, based on the frame-supplemented video data, an association between the behavior information and the pupil information within a preset time window, where the preset time window is determined according to the time point at which an external stimulus is applied to the target object.
In an optional implementation, the data matching module 230 is further configured to:
for the video data with the smaller number of image frames, shorter duration, or lower video sampling rate, supplement video image frames using a linear interpolation method.
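A minimal sketch of the linear-interpolation frame supplement, applied here to a scalar per-frame signal (e.g., the pupil diameter series) so the shorter stream can be aligned frame by frame with the longer one; the function name and scalar simplification are assumptions:

```python
def resample_signal(values, target_len):
    """Linearly resample a per-frame signal to target_len frames,
    inserting interpolated values between the original frames."""
    n = len(values)
    if target_len <= 1:
        return list(values[:target_len])
    if target_len == n:
        return list(values)
    out = []
    for j in range(target_len):
        # Position of output frame j on the original frame axis.
        t = j * (n - 1) / (target_len - 1)
        i = int(t)
        frac = t - i
        if i + 1 < n:
            out.append(values[i] * (1 - frac) + values[i + 1] * frac)
        else:
            out.append(values[-1])
    return out
```

For full video frames the same interpolation would be applied per pixel (or the nearest frame duplicated); after resampling, both streams share a common frame index and the stimulus-locked time windows can be cut consistently from each.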
The behavior and pupil information synchronous analysis apparatus provided in the embodiments of the present application can execute the behavior and pupil information synchronous analysis method provided in any embodiment of the present application, and has the functional modules and effects corresponding to the executed method.
Embodiment Three
FIG. 6 is a schematic structural diagram of a computer device provided in Embodiment Three of the present application. FIG. 6 shows a block diagram of an exemplary computer device 12 suitable for implementing the embodiments of the present application. The computer device 12 shown in FIG. 6 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application. The computer device 12 may be any terminal device with computing capability, such as an intelligent controller, a server, or a mobile phone.
As shown in FIG. 6, the computer device 12 takes the form of a general-purpose computing device. The components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the various system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. For example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer device 12 includes a variety of computer-system-readable media. These media may be any available media that can be accessed by the computer device 12, including volatile and non-volatile media, and removable and non-removable media.
The system memory 28 may include computer-system-readable media in the form of volatile memory, such as a Random Access Memory (RAM) 30 and/or a cache memory 32. The computer device 12 may include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, the storage system 34 may be configured to read from and write to a non-removable, non-volatile magnetic medium (not shown in FIG. 6, commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive configured to read from and write to a removable non-volatile magnetic disk (e.g., a "floppy disk") may be provided, as well as an optical disk drive configured to read from and write to a removable non-volatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc Read-Only Memory (DVD-ROM), or other optical media). In these cases, each drive may be connected to the bus 18 via one or more data media interfaces. The system memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to perform the functions of the embodiments of the present application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the system memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present application.
The computer device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. Such communication may be performed through an input/output (I/O) interface 22. Moreover, the computer device 12 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) through a network adapter 20. As shown, the network adapter 20 communicates with the other modules of the computer device 12 via the bus 18. It should be understood that, although not shown in FIG. 6, other hardware and/or software modules may be used in conjunction with the computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Arrays of Independent Disks (RAID) systems, tape drives, and data backup storage systems.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the behavior and pupil information synchronous analysis method provided in the embodiments of the present application, including:
acquiring behavior video data and pupil video data of a target object within the same time period, where the behavior video data includes video data captured from at least four shooting angles; analyzing the behavior video data and the pupil video data respectively to determine behavior information and pupil information of the target object; and establishing an association between the behavior information and the pupil information according to the collection times of the behavior video data and the pupil video data.
Embodiment Four
Embodiment Four provides a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, it implements the behavior and pupil information synchronous analysis method provided in any embodiment of the present application, including:
acquiring behavior video data and pupil video data of a target object within the same time period, where the behavior video data includes video data captured from at least four shooting angles; analyzing the behavior video data and the pupil video data respectively to determine behavior information and pupil information of the target object; and establishing an association between the behavior information and the pupil information according to the collection times of the behavior video data and the pupil video data.
The computer storage medium in the embodiments of the present application may use any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wireline, optical cable, Radio Frequency (RF), etc., or any suitable combination of the above.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Those of ordinary skill in the art should understand that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; or they may be made into separate integrated circuit modules; or multiple modules or steps among them may be made into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.

Claims (10)

  1. A behavior and pupil information synchronous analysis method, comprising:
    acquiring behavior video data and pupil video data of a target object within the same time period, wherein the behavior video data comprises video data captured from at least four shooting angles;
    analyzing the behavior video data and the pupil video data respectively to determine behavior information and pupil information of the target object; and
    establishing an association between the behavior information and the pupil information according to collection times of the behavior video data and the pupil video data.
  2. The method according to claim 1, wherein analyzing the behavior video data to determine the behavior information of the target object comprises:
    analyzing the behavior video data using a preset pose estimation model to determine preset skeleton key points of the target object in each frame of the behavior video data;
    performing three-dimensional reconstruction of the target object according to the preset skeleton key points in video images of different angles at a same time point in the behavior video data; and
    inputting a three-dimensional reconstruction result of the target object into a preset behavior analysis model to classify a behavior of the target object and determine the behavior information of the target object.
  3. The method according to claim 1, wherein analyzing the pupil video data to determine the pupil information of the target object comprises:
    extracting, based on a preset marker point extraction model, preset pupil-periphery marker points, a pupil center point, and preset eyeball-periphery marker points in each frame of the pupil video data;
    determining a pupil diameter of the target object according to the preset pupil-periphery marker points and the pupil center point;
    determining an eyeball center position of the target object based on the preset eyeball-periphery marker points, and determining a relative position of the pupil with respect to the eyeball of the target object based on the eyeball center position and the position of the pupil center point; and
    taking the pupil diameter of the target object and the relative position of the pupil with respect to the eyeball as the pupil information of the target object.
  4. The method according to claim 3, before determining the relative position of the pupil with respect to the eyeball of the target object, further comprising:
    filtering the position of the pupil center point in each frame of the pupil video data.
  5. The method according to claim 3, wherein, for pupil video data of the target object collected in different data collection batches, determining the eyeball center position of the target object based on the preset eyeball-periphery marker points comprises:
    inputting the preset eyeball-periphery marker points into a preset ellipse fitting algorithm to determine the eyeball center position of the target object; and
    correcting an ellipsoid symmetry axis corresponding to the eyeball center position determined by the ellipse fitting algorithm.
  6. The method according to claim 1, wherein establishing the association between the behavior information and the pupil information according to the collection times of the behavior video data and the pupil video data comprises:
    comparing whether the behavior video data and the pupil video data have a same number of image frames, a same duration, and a same video sampling rate, and determining a comparison result;
    performing a frame supplement operation on the video data with a relatively smaller number of image frames according to the comparison result; and
    establishing, according to the frame-supplemented video data, the association between the behavior information and the pupil information within a preset time window, wherein the preset time window is determined according to a time point at which an external stimulus is applied to the target object.
  7. The method according to claim 6, wherein performing the frame supplement operation on the video data with the relatively smaller number of image frames according to the comparison result comprises:
    supplementing video image frames using a linear interpolation method for the video data with the smaller number of image frames, shorter duration, or lower video sampling rate.
  8. A behavior and pupil information synchronous analysis apparatus, comprising:
    a data acquisition module, configured to acquire behavior video data and pupil video data of a target object within a same time period, wherein the behavior video data comprises video data captured from at least four shooting angles;
    a data analysis module, configured to analyze the behavior video data and the pupil video data respectively to determine behavior information and pupil information of the target object; and
    a data matching module, configured to establish an association between the behavior information and the pupil information according to collection times of the behavior video data and the pupil video data.
  9. A computer device, comprising:
    at least one processor; and
    a memory configured to store at least one program,
    wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the behavior and pupil information synchronous analysis method according to any one of claims 1-7.
  10. A computer-readable storage medium storing a computer program, wherein, when the program is executed by a processor, the behavior and pupil information synchronous analysis method according to any one of claims 1-7 is implemented.
PCT/CN2021/140013 2021-12-14 2021-12-21 Method and apparatus for synchronously analyzing behavior information and pupil information, and device and medium WO2023108711A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111530272.9 2021-12-14
CN202111530272.9A CN114220168A (en) 2021-12-14 2021-12-14 Behavior and pupil information synchronous analysis method, device, equipment and medium

Publications (1)

Publication Number Publication Date
WO2023108711A1 true WO2023108711A1 (en) 2023-06-22

Family

ID=80701972

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/140013 WO2023108711A1 (en) 2021-12-14 2021-12-21 Method and apparatus for synchronously analyzing behavior information and pupil information, and device and medium

Country Status (2)

Country Link
CN (1) CN114220168A (en)
WO (1) WO2023108711A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200117413A1 (en) * 2018-10-10 2020-04-16 International Business Machines Corporation Configuring computing device to utilize a multiple display arrangement by tracking eye movement
CN111160303A (en) * 2019-12-31 2020-05-15 深圳大学 Eye movement response information detection method and device, mobile terminal and storage medium
CN111598049A (en) * 2020-05-29 2020-08-28 中国工商银行股份有限公司 Cheating recognition method and apparatus, electronic device, and medium
CN113116351A (en) * 2021-04-27 2021-07-16 江苏利君智能科技有限责任公司 Dynamic attitude judgment type pupil brain machine combination device and system thereof
CN113537005A (en) * 2021-07-02 2021-10-22 福州大学 On-line examination student behavior analysis method based on attitude estimation


Also Published As

Publication number Publication date
CN114220168A (en) 2022-03-22

Similar Documents

Publication Publication Date Title
US20230367389A9 (en) Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location in at least one of a virtual and augmented reality system
TWI709852B (en) System and method for anomaly detection via a multi-prediction-model architecture
US20220036135A1 (en) Method and apparatus for determining image to be labeled and model training method and apparatus
EP4101371A1 (en) Electroencephalogram signal classifying method and apparatus, electroencephalogram signal classifying model training method and apparatus, and medium
US20190188903A1 (en) Method and apparatus for providing virtual companion to a user
WO2022179548A1 (en) Electroencephalogram signal classification method and apparatus, and device, storage medium and program product
CN110555468A (en) Electroencephalogram signal identification method and system combining recursion graph and CNN
WO2021120961A1 (en) Brain addiction structure map evaluation method and apparatus
Chen et al. Neckface: Continuously tracking full facial expressions on neck-mounted wearables
WO2021196456A1 (en) Experimental living body behavior analysis method and apparatus, and device and storage medium
CN108958482A (en) A kind of similitude action recognition device and method based on convolutional neural networks
CN111222464B (en) Emotion analysis method and system
CN111026267A (en) VR electroencephalogram idea control interface system
CN114051116A (en) Video monitoring method, device and system for driving test vehicle
WO2023108711A1 (en) Method and apparatus for synchronously analyzing behavior information and pupil information, and device and medium
WO2018076371A1 (en) Gesture recognition method, network training method, apparatus and equipment
WO2023108782A1 (en) Method and apparatus for training behavior recognition model, behavior recognition method, apparatus and system, and medium
CN114359965A (en) Training method and training device
KR20210062565A (en) Brain-computer interface apparatus based on feature extraction reflecting similarity between users using distance learning and task classification method using the same
CN109215762A (en) A kind of user psychology inference system and method
WO2023018254A1 (en) Method and apparatus for diagnosing skin disease by using image processing
KR20230168094A (en) Method, system and non-transitory computer-readable recording medium for processing image for analysis of nail
Albuquerque et al. Remote Gait Type Classification System Using Markerless 2D Video. Diagnostics 2021, 11, 1824
Yordanov et al. Humanoid Robot Detecting Animals via Neural Network
CN112232274A (en) Depth image model training method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21967843

Country of ref document: EP

Kind code of ref document: A1