WO2020011069A1 - Feature processing method and device for motion trajectories, and computer storage medium - Google Patents

Feature processing method and device for motion trajectories, and computer storage medium

Info

Publication number
WO2020011069A1
WO2020011069A1 (PCT/CN2019/094475)
Authority
WO
WIPO (PCT)
Prior art keywords
motion trajectory
trajectory
image
time
motion
Prior art date
Application number
PCT/CN2019/094475
Other languages
English (en)
French (fr)
Inventor
蒋丹妮
何东杰
Original Assignee
中国银联股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国银联股份有限公司
Priority to US17/258,665 (US11222431B2)
Priority to KR1020207032005A (KR102343849B1)
Priority to EP19834148.9A (EP3822830B1)
Priority to JP2020554890A (JP7096904B2)
Publication of WO2020011069A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354: Pointing devices with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03543: Mice or pucks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251: Analysis of motion using feature-based methods involving models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G06F21/316: User authentication by observing the pattern of computer usage, e.g. typical user behaviour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/038: Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30241: Trajectory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10: Character recognition
    • G06V30/32: Digital ink

Definitions

  • The present invention relates to the field of computer data processing and, in particular, to a feature processing method, device, and computer storage medium for motion trajectories.
  • Motion trajectories generated by humans using a mouse or dragging a slider have inherent biometric characteristics that distinguish them from automated machine operations, giving them application value in human-machine verification and risk identification.
  • Current methods usually set the feature-extraction rules for a motion trajectory manually based on experience, or perform function approximation and curve fitting on the trajectory.
  • This technical solution has the following problems: (1) because the feature-extraction rules are set manually, the quality of the selected features depends heavily on human prior knowledge, and the rules are strongly scene-specific and not universal; (2) a motion trajectory has complex structural characteristics, and existing methods can describe it only one-sidedly using discrete statistics such as the mean and variance of kinematic variables (speed, acceleration, displacement, etc.), so rich hidden information is difficult to express and is lost during modeling; (3) the effect of function approximation and curve fitting is unstable: choosing an appropriate curve type is hard to define, outliers introduce large fitting errors, and a motion trajectory cannot always be abstracted by a suitable function, which readily leads to overfitting.
  • According to one aspect, a feature processing method for a motion trajectory includes: a client collects the motion trajectory generated by user behavior to obtain an ordered point set, each trajectory point of which contains a position element and a time element; the information in the position and time elements is used to convert the trajectory into an image; and image processing is performed on the image to obtain one or more feature vectors of the trajectory.
  • The motion trajectory generated by user behavior includes a mouse trajectory or a trajectory generated by touching a screen.
  • The n-th point in the ordered point set is represented as [x_n, y_n, t_n], where x_n is the abscissa, y_n is the ordinate, and t_n is the time.
  • Using the information in the position and time elements to convert the motion trajectory into an image includes: drawing the trajectory according to the position and time elements of each trajectory point in the ordered set; and mapping the drawn trajectory to an image represented by at least one image channel value.
  • Trajectory points are drawn within a given coordinate-axis area according to the position elements, and the lines between the points are drawn in the order of the time elements, thereby drawing the motion trajectory.
  • Converting the motion trajectory into an image may further include: before mapping, smoothing and enhancing the drawn trajectory to enrich its feature information.
  • The x-direction velocity, the y-direction velocity, and the time are extracted and mapped into the value interval [0, 255] to obtain the RGB image channel values used to represent the image.
  • The RGB image channel values of points through which no motion trajectory passes are set to 0.
  • The above method may further include: performing human-machine recognition or verification according to the one or more feature vectors.
  • The above method may further include: obtaining operating-environment data; and performing human-machine recognition or verification according to the one or more feature vectors and the operating-environment data.
  • According to another aspect, a feature processing device for a motion trajectory includes: a collection unit configured to collect the motion trajectory generated by user behavior to obtain an ordered point set, each trajectory point of which contains a position element and a time element; a conversion unit configured to convert the trajectory into an image using the information in the position and time elements; and a processing unit configured to perform image processing on the image to obtain one or more feature vectors of the trajectory.
  • The motion trajectory generated by user behavior includes a mouse trajectory or a trajectory generated by touching a screen.
  • The n-th point in the ordered point set is represented as [x_n, y_n, t_n], where x_n is the abscissa, y_n is the ordinate, and t_n is the time.
  • The conversion unit is configured to draw the motion trajectory according to the position and time elements of each trajectory point in the ordered set, and to map the drawn trajectory to an image represented by at least one image channel value.
  • The conversion unit is configured to draw trajectory points within a given coordinate-axis area according to the position elements and to draw the lines between the points in the order of the time elements, thereby drawing the motion trajectory.
  • The conversion unit is configured to smooth and enhance the drawn trajectory before mapping, thereby enriching the feature information.
  • The conversion unit is configured to extract the x-direction velocity, the y-direction velocity, and the time of the drawn trajectory and to map them into the value interval [0, 255] to obtain the RGB image channel values used to represent the image.
  • The conversion unit is further configured to set the RGB image channel values of all points through which no motion trajectory passes to 0.
  • The above device may further include a first recognition unit configured to perform human-machine recognition or verification according to the one or more feature vectors.
  • The above device may further include: an obtaining unit for obtaining operating-environment data; and a second recognition unit for performing human-machine recognition or verification based on the one or more feature vectors and the operating-environment data.
  • According to yet another aspect, a computer storage medium includes instructions that, when executed, cause a processor to perform the method described above.
  • The feature processing scheme of the present invention does not depend on human prior knowledge and is highly universal.
  • The motion trajectory is converted into an image and used as the input.
  • The image-based feature modeling method of the present invention retains the original structural information of the motion trajectory and other hidden information that is difficult to describe with rules.
  • The processed features are suitable for various advanced image processing and deep-learning algorithms (such as convolutional neural networks, CNN), broadening the range of models applicable to trajectory features.
  • FIG. 1 shows a feature processing method for a motion trajectory according to an embodiment of the present invention; and
  • FIG. 2 shows a feature processing device for a motion trajectory according to an embodiment of the present invention.
  • FIG. 1 illustrates a feature processing method 1000 for a motion trajectory according to an embodiment of the present invention. As shown in FIG. 1, the method 1000 includes the following steps:
  • Step 110: a client collects the motion trajectory generated by user behavior to obtain an ordered point set, each trajectory point of which contains a position element and a time element;
  • Step 120: the information in the position and time elements is used to convert the motion trajectory into an image; and
  • Step 130: image processing is performed on the image to obtain one or more feature vectors of the motion trajectory.
  • In the context of the present invention, the term "motion trajectory generated by user behavior" may include a mouse trajectory or a trajectory generated by touching a screen.
  • In one embodiment, the motion trajectory samples collected by the client are represented by an ordered point set, e.g., [[x_1, y_1, t_1], [x_2, y_2, t_2], ..., [x_n, y_n, t_n]], where the n-th point is [x_n, y_n, t_n], with x_n the abscissa, y_n the ordinate, and t_n the time.
  • Here, time is a processed relative value: t_1 = 0 and t_i = t_(i-1) + Δt for i > 1, where Δt is the sampling time interval.
  • The maximum and minimum trajectory coordinates determine the image size: the image length is x_max - x_min and the width is y_max - y_min. The trajectory points are connected in the order t_1, t_2, ..., t_n.
  • The above embodiment uses the extreme values of the trajectory coordinates to determine the image boundary.
  • Alternatively, the image boundary may be determined by the client's screen resolution, with the trajectory coordinate values depending on how the coordinate system is set.
  • For example, if the user's screen resolution is 1600 × 900, the rows and columns of the pixel matrix should conform to that 16:9 ratio.
  • With the origin of the coordinate system at the center of the screen, the coordinate boundaries of the motion trajectory are X ∈ [-800, 800], Y ∈ [-450, 450].
  • The trajectory sampling density can be increased or decreased appropriately according to the verification-accuracy requirements of different scenarios or the constraints of the client hardware.
  • In one embodiment, using the information in the position and time elements to convert the motion trajectory into an image includes: drawing the trajectory according to the position and time elements of each trajectory point in the ordered set; and mapping the drawn trajectory to an image represented by at least one image channel value. For example, within a given coordinate-axis area, trajectory points are drawn at the coordinate positions of the sampled point set, and the lines between the points are drawn in chronological order to render the trajectory.
  • The simplest case is to construct a binary (0/1) pixel matrix: pixels through which the trajectory passes take the value 1, and all others take the value 0.
  • In one embodiment, converting the motion trajectory into an image further includes: before mapping, smoothing and enhancing the drawn trajectory to enrich its feature information. For example, methods such as simple averaging, median filtering, and Gaussian smoothing may be used. After processing, each motion trajectory becomes a pixel matrix that stores the trajectory's structural and motion features, and this matrix is suitable for various advanced algorithm models.
  • In one embodiment, other information is added on top of the motion trajectory in the form of image channels.
  • The horizontal and vertical velocities and the time of the trajectory points are mapped to integers in the interval [0, 255], with each dimension of information representing one image channel value.
  • A single-channel image requires only one dimension of information per pixel (a grayscale image), while a three-channel image is in RGB mode (a color image).
  • For example, the velocity in the x direction, the velocity in the y direction, and the time are chosen as the RGB channel information. The approximate velocity of the i-th point in the x direction is v_xi = (x_(i+1) - x_(i-1)) / (t_(i+1) - t_(i-1)), and similarly the approximate velocity in the y direction is v_yi = (y_(i+1) - y_(i-1)) / (t_(i+1) - t_(i-1)).
  • The x-direction velocity, the y-direction velocity, and the time are then used as the RGB channel information.
  • Alternatively, the x- and y-direction accelerations and time (a_xi, a_yi, t_i), or other combinations such as velocity, acceleration, and time (v_i, a_i, t_i), can be transformed and used as the RGB channel information. Here the acceleration of the i-th point in the x direction is a_xi = (v_xi - v_x(i-1)) / (t_i - t_(i-1)), the acceleration in the y direction is a_yi = (v_yi - v_y(i-1)) / (t_i - t_(i-1)), and the overall acceleration at the i-th point combines the two components, a_i = sqrt(a_xi^2 + a_yi^2).
  • The above feature processing method 1000 may further include: performing human-machine recognition or verification according to the one or more feature vectors. In another embodiment, the method 1000 may further include: obtaining operating-environment data; and performing human-machine recognition or verification according to the one or more feature vectors and the operating-environment data.
  • In the process of human-machine recognition and verification, besides behavioral features such as the motion trajectory of the user's operation, operating-environment data also has value. Attribute information such as the operating environment is encoded and mapped, converting category information into numerical form as supplementary sample features. In one embodiment, a number is assigned to each category, e.g., operating system Windows7: 0, Ubuntu: 1, ...; however, this method sometimes implicitly imposes an ordering on the categories (in the example above, Ubuntu > Windows7 might be inferred).
  • A one-hot encoding method can instead convert one attribute into n category features, where n is the number of categories. For example, assuming the operating system has the four categories Windows7, Ubuntu, IOS, and Android, the corresponding encoding can be: x1[Windows7] = 1000, x2[Ubuntu] = 0100, x3[IOS] = 0010, x4[Android] = 0001.
  • FIG. 2 shows a feature processing device 2000 for a motion trajectory according to an embodiment of the present invention.
  • The device 2000 includes a collection unit 210, a conversion unit 220, and a processing unit 230.
  • The collection unit 210 is configured to collect the motion trajectory generated by user behavior to obtain an ordered point set, each trajectory point of which contains a position element and a time element.
  • The conversion unit 220 is configured to convert the motion trajectory into an image using the information in the position and time elements.
  • The processing unit 230 is configured to perform image processing on the image to obtain one or more feature vectors of the motion trajectory.
  • In one embodiment, the conversion unit 220 is configured to draw the motion trajectory according to the position and time elements of each trajectory point in the ordered set, and to map the drawn trajectory to an image represented by at least one image channel value.
  • In another embodiment, the conversion unit 220 is configured to draw trajectory points within a given coordinate-axis area according to the position elements and to draw the lines between the points in the order of the time elements, thereby drawing the motion trajectory.
  • In yet another embodiment, the conversion unit 220 is configured to smooth and enhance the drawn trajectory before mapping, thereby enriching the feature information.
  • In yet another embodiment, the conversion unit 220 is configured to extract the x-direction velocity, the y-direction velocity, and the time of the drawn trajectory and to map them into the value interval [0, 255] to obtain the RGB image channel values used to represent the image. The conversion unit may further be configured to set the RGB image channel values of all points through which no trajectory passes to 0.
  • The above device 2000 may further include a first recognition unit (not shown) configured to perform human-machine recognition or verification according to the one or more feature vectors.
  • The above device 2000 may further include: an obtaining unit for obtaining operating-environment data; and a second recognition unit for performing human-machine recognition or verification according to the one or more feature vectors and the operating-environment data.
  • Embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of hardware, software, or a combination of software and hardware. Moreover, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code. For example, these computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable processing device, so as to generate a sequence of instructions that performs specified operations.
  • The feature processing scheme of the present invention does not rely on human prior knowledge and is highly universal.
  • The motion trajectory is converted into an image and used as the input.
  • The image-based feature modeling method of the present invention retains the original structural information of the motion trajectory and other hidden information that is difficult to describe with rules.
  • The processed features are suitable for various advanced image processing and deep-learning algorithms (such as convolutional neural networks, CNN), broadening the range of models applicable to trajectory features.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Image Analysis (AREA)

Abstract

A feature processing method and device for motion trajectories, and a computer storage medium. The method includes: a client collects a motion trajectory generated by user behavior to obtain an ordered point set, each trajectory point in the ordered point set containing a position element and a time element; the information in the position element and the time element is used to convert the motion trajectory into an image; and image processing is performed on the image to obtain one or more feature vectors of the motion trajectory.

Description

Feature processing method and device for motion trajectories, and computer storage medium
Technical Field
The present invention relates to the field of computer data processing and, in particular, to a feature processing method and device for motion trajectories and a computer storage medium.
Background Art
Motion trajectories generated by human behaviors such as using a mouse or dragging a slider have inherent biometric characteristics that distinguish them from automated machine operations, giving them application value in human-machine verification and risk identification.
In the prior art, particularly in the fields of human-machine recognition and slider verification, current methods usually set the feature-extraction rules for a motion trajectory manually based on experience, or perform function approximation and curve fitting on the trajectory. Such technical solutions have the following problems: (1) because the feature-extraction rules are set manually, the quality of the selected features depends heavily on human prior knowledge, and the rules are strongly scene-specific and not universal; (2) a motion trajectory has complex structural characteristics, and existing methods can describe it only one-sidedly using discrete statistics such as the mean and variance of kinematic variables (speed, acceleration, displacement, etc.), so rich hidden information is difficult to express and is lost during modeling; (3) the effect of function approximation and curve fitting is unstable: choosing an appropriate curve type is hard to define, outliers introduce large fitting errors, and a motion trajectory cannot always be abstracted by a suitable function, which readily leads to overfitting.
The information disclosed in this Background section is intended only to enhance understanding of the general background of the invention and should not be taken as an acknowledgment or any form of suggestion that this information constitutes prior art already known to a person of ordinary skill in the art.
Summary of the Invention
To address one or more deficiencies in the prior art, according to one aspect of the present invention, a feature processing method for a motion trajectory is provided, the method including: a client collects a motion trajectory generated by user behavior to obtain an ordered point set, each trajectory point in the ordered point set containing a position element and a time element; the information in the position element and the time element is used to convert the motion trajectory into an image; and image processing is performed on the image to obtain one or more feature vectors of the motion trajectory.
In the above method, the motion trajectory generated by user behavior includes a mouse trajectory or a trajectory generated by touching a screen.
In the above method, the n-th point in the ordered point set is represented as [x_n, y_n, t_n], where x_n is the abscissa, y_n is the ordinate, and t_n is the time.
In the above method, using the information in the position element and the time element to convert the motion trajectory into an image includes: drawing the motion trajectory according to the position and time elements of each trajectory point in the ordered point set; and mapping the drawn trajectory to an image represented by at least one image channel value.
In the above method, trajectory points are drawn within a given coordinate-axis area according to the position elements, and the lines between the points are drawn in the order of the time elements, thereby drawing the motion trajectory.
In the above method, converting the motion trajectory into an image further includes: before mapping, smoothing and enhancing the drawn trajectory to enrich its feature information.
In the above method, the x-direction velocity, the y-direction velocity, and the time of the drawn trajectory are extracted and mapped into the value interval [0, 255] to obtain the RGB image channel values used to represent the image.
In the above method, the RGB image channel values of points through which no motion trajectory passes are set to 0.
The above method may further include: performing human-machine recognition or verification according to the one or more feature vectors.
The above method may further include: obtaining operating-environment data; and performing human-machine recognition or verification according to the one or more feature vectors and the operating-environment data.
According to another aspect of the present invention, a feature processing device for a motion trajectory is provided, the device including: a collection unit for collecting a motion trajectory generated by user behavior to obtain an ordered point set, each trajectory point in the ordered point set containing a position element and a time element; a conversion unit for converting the motion trajectory into an image using the information in the position element and the time element; and a processing unit for performing image processing on the image to obtain one or more feature vectors of the motion trajectory.
In the above device, the motion trajectory generated by user behavior includes a mouse trajectory or a trajectory generated by touching a screen.
In the above device, the n-th point in the ordered point set is represented as [x_n, y_n, t_n], where x_n is the abscissa, y_n is the ordinate, and t_n is the time.
In the above device, the conversion unit is configured to draw the motion trajectory according to the position and time elements of each trajectory point in the ordered point set, and to map the drawn trajectory to an image represented by at least one image channel value.
In the above device, the conversion unit is configured to draw trajectory points within a given coordinate-axis area according to the position elements and to draw the lines between the points in the order of the time elements, thereby drawing the motion trajectory.
In the above device, the conversion unit is configured to smooth and enhance the drawn trajectory before mapping, thereby enriching the feature information.
In the above device, the conversion unit is configured to extract the x-direction velocity, the y-direction velocity, and the time of the drawn trajectory and to map them into the value interval [0, 255] to obtain the RGB image channel values used to represent the image.
In the above device, the conversion unit is further configured to set the RGB image channel values of points through which no motion trajectory passes to 0.
The above device may further include a first recognition unit for performing human-machine recognition or verification according to the one or more feature vectors.
The above device may further include: an obtaining unit for obtaining operating-environment data; and a second recognition unit for performing human-machine recognition or verification according to the one or more feature vectors and the operating-environment data.
According to yet another aspect of the present invention, a computer storage medium is provided, the medium including instructions that, when executed, cause a processor to perform the method described above.
The feature processing scheme of the present invention does not depend on human prior knowledge and is highly universal. The motion trajectory is converted into an image and used as the input, so there is no need to manually design extraction rules for motion features (for example, extracting features such as jitter and displacement), avoiding problems of improper or incomplete feature modeling. In addition, the image-based feature modeling method of the present invention retains the original structural information of the motion trajectory as well as other hidden information that is difficult to describe with rules. Besides traditional machine-learning algorithms, the processed features are suitable for various advanced image processing and deep-learning algorithms (such as convolutional neural networks, CNN), broadening the range of models applicable to trajectory features. Moreover, by adopting the feature processing scheme of the present invention, an attacker can hardly discover the underlying patterns during human-machine recognition or verification and cannot simulate in bulk the normal human operations needed to deceive the risk-control engine.
Other features and advantages of the method and device of the present invention will become clearer through the accompanying drawings incorporated herein and the following detailed description, which together serve to explain certain principles of the invention.
Brief Description of the Drawings
FIG. 1 shows a feature processing method for a motion trajectory according to an embodiment of the present invention; and
FIG. 2 shows a feature processing device for a motion trajectory according to an embodiment of the present invention.
Detailed Description
The following description sets forth specific embodiments of the invention to teach those skilled in the art how to make and use the best mode of the invention. To teach the inventive principles, some conventional aspects have been simplified or omitted. Those skilled in the art will appreciate that variations derived from these embodiments fall within the scope of the invention, and that the features described below can be combined in various ways to form multiple variations of the invention. Accordingly, the invention is not limited to the specific embodiments described below, but only by the claims and their equivalents.
FIG. 1 illustrates a feature processing method 1000 for a motion trajectory according to an embodiment of the present invention. As shown in FIG. 1, the method 1000 includes the following steps:
Step 110: a client collects a motion trajectory generated by user behavior to obtain an ordered point set, each trajectory point in the ordered point set containing a position element and a time element;
Step 120: the information in the position element and the time element is used to convert the motion trajectory into an image; and
Step 130: image processing is performed on the image to obtain one or more feature vectors of the motion trajectory.
In the context of the present invention, the term "motion trajectory generated by user behavior" may include a mouse trajectory or a trajectory generated by touching a screen.
In one embodiment, the motion trajectory samples collected by the client are represented by an ordered point set, e.g., [[x_1, y_1, t_1], [x_2, y_2, t_2], ..., [x_n, y_n, t_n]], where the n-th point is [x_n, y_n, t_n], x_n is the abscissa, y_n is the ordinate, and t_n is the time. Here, time is a processed relative value: t_1 = 0 and t_i = t_(i-1) + Δt (i > 1), where Δt is the sampling time interval. The maximum and minimum trajectory coordinates determine the image size: the image length is x_max - x_min and the width is y_max - y_min, and the trajectory points are connected in the order t_1, t_2, ..., t_n.
The above embodiment uses the extreme values of the trajectory coordinates to determine the image boundary. In other embodiments, however, the image boundary may be determined by the client's screen resolution, with the trajectory coordinate values depending on how the coordinate system is set. For example, if the user's screen resolution is 1600 × 900, the rows and columns of the pixel matrix should conform to that 16:9 ratio; with the origin of the coordinate system at the center of the screen, the coordinate boundaries of the motion trajectory are X ∈ [-800, 800], Y ∈ [-450, 450]. In addition, those skilled in the art will readily understand that the trajectory sampling density may be increased or decreased appropriately according to the verification-accuracy requirements of different scenarios or the constraints of the client hardware.
In one embodiment, using the information in the position element and the time element to convert the motion trajectory into an image includes: drawing the motion trajectory according to the position and time elements of each trajectory point in the ordered point set; and mapping the drawn trajectory to an image represented by at least one image channel value. For example, within a given coordinate-axis area, trajectory points are drawn at the coordinate positions of the sampled point set, and the lines between the points are drawn in chronological order to render the trajectory. The simplest case is to construct a binary (0/1) pixel matrix: pixels through which the trajectory passes take the value 1, and all others take the value 0.
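The binary rasterization just described can be sketched as follows. This is a minimal illustration, not code from the patent: the function name, the assumption of integer pixel coordinates, and the simple linear interpolation used to connect consecutive points are all choices made for the example.

```python
# Illustrative sketch: rasterize an ordered point set [[x, y, t], ...] into a
# 0/1 pixel matrix. Pixels the trajectory passes through are set to 1; all
# others stay 0. The image size comes from the coordinate extremes, as in the
# text (length x_max - x_min, width y_max - y_min).
def trajectory_to_binary_matrix(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, y_min = min(xs), min(ys)
    width = max(xs) - x_min + 1
    height = max(ys) - y_min + 1
    matrix = [[0] * width for _ in range(height)]
    # Connect consecutive points in time order with simple linear interpolation.
    for (x0, y0, _), (x1, y1, _) in zip(points, points[1:]):
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for s in range(steps + 1):
            x = round(x0 + (x1 - x0) * s / steps)
            y = round(y0 + (y1 - y0) * s / steps)
            matrix[y - y_min][x - x_min] = 1
    return matrix
```

For a three-point trajectory such as [[0, 0, 0], [2, 2, 1], [4, 0, 2]], this yields a 3 × 5 matrix with ones along the two connecting segments.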
In one embodiment, converting the motion trajectory into an image further includes: before mapping, smoothing and enhancing the drawn trajectory to enrich its feature information. For example, methods such as simple averaging, median filtering, and Gaussian smoothing may be used to smooth and enhance the trajectory. After processing, each motion trajectory is converted into a pixel matrix that stores the trajectory's structural and motion features; this matrix is suitable for various advanced algorithm models.
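As a hedged sketch of this smoothing step, the fragment below applies a simple 3 × 3 averaging filter, one of the options mentioned (simple averaging, median filtering, Gaussian smoothing); the kernel size and the edge handling are illustrative choices, not specified by the patent.

```python
# Smooth a pixel matrix with a 3x3 mean filter: each output pixel is the
# average of its in-bounds neighbours, which spreads the trajectory's energy
# into adjacent pixels and enriches the feature information.
def smooth_matrix(matrix):
    h, w = len(matrix), len(matrix[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neighbours = [
                matrix[i + di][j + dj]
                for di in (-1, 0, 1)
                for dj in (-1, 0, 1)
                if 0 <= i + di < h and 0 <= j + dj < w
            ]
            out[i][j] = sum(neighbours) / len(neighbours)
    return out
```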
In one embodiment, other information is added on top of the motion trajectory in the form of image channels. The horizontal and vertical velocities and the time of the trajectory points are mapped to integers in the interval [0, 255], with each dimension of information representing one image channel value. Single-channel or multi-channel images can be built as needed: a single-channel image requires only one dimension of information per pixel (a grayscale image), while a three-channel image is in RGB mode (a color image).
For example, the velocity in the x direction, the velocity in the y direction, and the time are chosen as the RGB channel information. The approximate velocity of the i-th point in the x direction is:
v_xi = (x_(i+1) - x_(i-1)) / (t_(i+1) - t_(i-1))
Similarly, the approximate velocity of the i-th point in the y direction is:
v_yi = (y_(i+1) - y_(i-1)) / (t_(i+1) - t_(i-1))
The min-max normalization method maps the value ranges of the x-direction velocity, the y-direction velocity, and the time into the interval [0, 1]; the normalized values are then multiplied by 255 to transform the range to [0, 255], yielding the RGB channel values of each trajectory point. Points through which no trajectory passes are set to R = G = B = 0.
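A minimal sketch of this channel construction follows, assuming central differences for the velocities and, for simplicity, excluding the two endpoints; the helper names are our own, not from the patent.

```python
# Map per-point x-velocity, y-velocity and time to RGB channel values:
# approximate the velocities by central differences, min-max normalize each
# series to [0, 1], scale by 255 and round to integers.
def minmax_to_255(values):
    lo, hi = min(values), max(values)
    if hi == lo:  # degenerate constant series: map everything to 0
        return [0 for _ in values]
    return [round((v - lo) / (hi - lo) * 255) for v in values]

def rgb_channels(points):
    # points: [[x, y, t], ...]; interior points only (central differences).
    vx = [(points[i + 1][0] - points[i - 1][0]) / (points[i + 1][2] - points[i - 1][2])
          for i in range(1, len(points) - 1)]
    vy = [(points[i + 1][1] - points[i - 1][1]) / (points[i + 1][2] - points[i - 1][2])
          for i in range(1, len(points) - 1)]
    ts = [points[i][2] for i in range(1, len(points) - 1)]
    return list(zip(minmax_to_255(vx), minmax_to_255(vy), minmax_to_255(ts)))
```

Each returned triple is one (R, G, B) value for a trajectory point; a constant series (for example, a purely horizontal trajectory's y-velocity) degenerates to channel value 0.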
Of course, those skilled in the art will understand that other information can be used as channel information. The above embodiment uses the x- and y-direction velocities and the time as the RGB channel information. In other embodiments, the x- and y-direction accelerations and time (a_xi, a_yi, t_i), or other combinations such as velocity, acceleration, and time (v_i, a_i, t_i), can be transformed and used as the RGB channel information. Here, the acceleration of the i-th point in the x direction is:
a_xi = (v_xi - v_x(i-1)) / (t_i - t_(i-1))
The acceleration of the i-th point in the y direction is:
a_yi = (v_yi - v_y(i-1)) / (t_i - t_(i-1))
And the acceleration at the i-th point combines the two components:
a_i = sqrt(a_xi^2 + a_yi^2)
In addition, besides the min-max normalization algorithm, other standardization methods such as Z-score can also be used.
In one embodiment, the above feature processing method 1000 may further include: performing human-machine recognition or verification according to the one or more feature vectors. In another embodiment, the method 1000 may further include: obtaining operating-environment data; and performing human-machine recognition or verification according to the one or more feature vectors and the operating-environment data.
In the process of human-machine recognition and verification, besides behavioral features such as the motion trajectory of the user's operation, operating-environment data also has value. Attribute information such as the operating environment is encoded and mapped, converting category information into numerical form as supplementary sample features. In one embodiment, a number is assigned to each category, e.g., operating system Windows7: 0, Ubuntu: 1, ...; however, this method sometimes implicitly imposes an ordering on the categories. In the example above, Ubuntu > Windows7 might be inferred.
In one embodiment, a one-hot encoding method can be used to convert one attribute into n category features, where n is the number of categories. For example, assuming the operating system has the four categories Windows7, Ubuntu, IOS, and Android, the corresponding encoding can be:
x1[Windows7] = 1000, x2[Ubuntu] = 0100, x3[IOS] = 0010, x4[Android] = 0001.
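The one-hot step above can be sketched as follows; the category order and the function names are illustrative assumptions, not part of the patent.

```python
# Build a one-hot encoder for a categorical attribute: each of the n category
# values becomes an n-dimensional indicator vector, avoiding the implicit
# ordering that plain label encoding (Windows7: 0, Ubuntu: 1, ...) introduces.
def one_hot(categories):
    index = {c: i for i, c in enumerate(categories)}
    def encode(value):
        vec = [0] * len(categories)
        vec[index[value]] = 1
        return vec
    return encode

encode_os = one_hot(["Windows7", "Ubuntu", "IOS", "Android"])
```

With this encoder, encode_os("Ubuntu") returns [0, 1, 0, 0], matching the x2[Ubuntu] = 0100 row above.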
FIG. 2 shows a feature processing device 2000 for a motion trajectory according to an embodiment of the present invention. As shown in FIG. 2, the device 2000 includes a collection unit 210, a conversion unit 220, and a processing unit 230. The collection unit 210 is configured to collect a motion trajectory generated by user behavior to obtain an ordered point set, each trajectory point in the ordered point set containing a position element and a time element. The conversion unit 220 is configured to convert the motion trajectory into an image using the information in the position element and the time element. The processing unit 230 is configured to perform image processing on the image to obtain one or more feature vectors of the motion trajectory.
In one embodiment, the conversion unit 220 is configured to draw the motion trajectory according to the position and time elements of each trajectory point in the ordered point set, and to map the drawn trajectory to an image represented by at least one image channel value. In another embodiment, the conversion unit 220 is configured to draw trajectory points within a given coordinate-axis area according to the position elements and to draw the lines between the points in the order of the time elements, thereby drawing the motion trajectory. In yet another embodiment, the conversion unit 220 is configured to smooth and enhance the drawn trajectory before mapping, thereby enriching the feature information. In yet another embodiment, the conversion unit 220 is configured to extract the x-direction velocity, the y-direction velocity, and the time of the drawn trajectory and to map them into the value interval [0, 255] to obtain the RGB image channel values used to represent the image. In yet another embodiment, the conversion unit is further configured to set the RGB image channel values of points through which no motion trajectory passes to 0.
In one embodiment, the above device 2000 may further include a first recognition unit (not shown) for performing human-machine recognition or verification according to the one or more feature vectors. In another embodiment, the device 2000 may further include: an obtaining unit for obtaining operating-environment data; and a second recognition unit for performing human-machine recognition or verification according to the one or more feature vectors and the operating-environment data.
需要指出的是,前运动轨迹的特征处理方法和设备以人机识别或验证为应用场景进行了具体描述。本领域技术人员可以理解,上述方法和设备可在不经过实质性改变的基础上适用到其他人机互动场景。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用硬件、软件、或软硬件结合的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。例如,可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编辑处理设备的处理器,使得产生执行指定操作的指令序列。
综上,本发明的特征处理方案不依赖于人的先验知识,普适性强。将运动轨迹转换成图像作为输入,无需人工设计运动特征的提 取规则,例如提取抖动和位移等特征,避免了特征建模不当、考虑不全等问题。此外,本发明的基于图像的特征建模方法保留了运动轨迹的原始结构信息及其他难以用规则描述的隐含信息。除了传统的机器学习算法,处理后的特征适用于各类高级图像处理、深度学习算法(例如卷积神经网络CNN),扩大了轨迹特征的模型适用范围。而且,通过采用本发明的特征处理方案,在进行人机识别或验证时,攻击者难以发现规律,无法批量模拟出欺骗风控引擎的正常人类操作。
The examples above mainly illustrate the feature processing method, device and computer storage medium for motion trajectories of the present invention. Although only some specific embodiments of the present invention have been described, those of ordinary skill in the art will understand that the present invention can be implemented in many other forms without departing from its spirit and scope. Accordingly, the examples and embodiments shown are to be regarded as illustrative rather than restrictive, and the present invention may cover various modifications and substitutions without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (21)

  1. A feature processing method for a motion trajectory, characterized in that the method comprises:
    a client acquiring a motion trajectory generated by user behavior so as to obtain an ordered point set, each trajectory point in the ordered point set containing a position element and a time element;
    converting the motion trajectory into an image using the information in the position element and the time element; and
    obtaining one or more feature vectors of the motion trajectory by performing image processing on the image.
  2. The method of claim 1, wherein the motion trajectory generated by user behavior comprises a mouse motion trajectory or a motion trajectory generated by touching a screen.
  3. The method of claim 1, wherein the n-th point in the ordered point set is represented as [x_n, y_n, t_n], where x_n is the abscissa, y_n is the ordinate, and t_n is the time.
  4. The method of claim 1, wherein converting the motion trajectory into an image using the information in the position element and the time element comprises:
    drawing the motion trajectory according to the position element and the time element of each trajectory point in the ordered point set; and
    mapping the drawn motion trajectory to an image represented by at least one image channel value.
  5. The method of claim 4, wherein trajectory points are drawn within a given coordinate-axis region according to the position elements, and connecting lines between the trajectory points are drawn in the order of the time elements, thereby drawing the motion trajectory.
  6. The method of claim 4, wherein converting the motion trajectory into an image using the information in the position element and the time element further comprises:
    smoothing and enhancing the drawn motion trajectory before the mapping, thereby enriching the feature information.
  7. The method of claim 4, wherein the velocity in the x direction, the velocity in the y direction, and the time of the drawn motion trajectory are extracted, and are mapped and transformed into the value range [0, 255], thereby obtaining the RGB image channel values used to represent the image.
  8. The method of claim 7, wherein the RGB image channel values of points not traversed by the motion trajectory are all set to 0.
  9. The method of claim 1, further comprising:
    performing human-machine recognition or verification based on the one or more feature vectors.
  10. The method of claim 1, further comprising:
    obtaining operating-environment data; and
    performing human-machine recognition or verification based on the one or more feature vectors and the operating-environment data.
  11. A feature processing device for a motion trajectory, characterized in that the device comprises:
    an acquisition unit for acquiring a motion trajectory generated by user behavior so as to obtain an ordered point set, each trajectory point in the ordered point set containing a position element and a time element;
    a conversion unit for converting the motion trajectory into an image using the information in the position element and the time element; and
    a processing unit for obtaining one or more feature vectors of the motion trajectory by performing image processing on the image.
  12. The device of claim 11, wherein the motion trajectory generated by user behavior comprises a mouse motion trajectory or a motion trajectory generated by touching a screen.
  13. The device of claim 11, wherein the n-th point in the ordered point set is represented as [x_n, y_n, t_n], where x_n is the abscissa, y_n is the ordinate, and t_n is the time.
  14. The device of claim 11, wherein the conversion unit is configured to draw the motion trajectory according to the position element and the time element of each trajectory point in the ordered point set, and to map the drawn motion trajectory to an image represented by at least one image channel value.
  15. The device of claim 14, wherein the conversion unit is configured to draw trajectory points within a given coordinate-axis region according to the position elements and to draw connecting lines between the trajectory points in the order of the time elements, thereby drawing the motion trajectory.
  16. The device of claim 14, wherein the conversion unit is configured to smooth and enhance the drawn motion trajectory before the mapping, thereby enriching the feature information.
  17. The device of claim 14, wherein the conversion unit is configured to extract the velocity in the x direction, the velocity in the y direction, and the time of the drawn motion trajectory, and to map and transform them into the value range [0, 255], thereby obtaining the RGB image channel values used to represent the image.
  18. The device of claim 17, wherein the conversion unit is further configured to set the RGB image channel values of points not traversed by the motion trajectory to 0.
  19. The device of claim 11, further comprising:
    a first recognition unit for performing human-machine recognition or verification based on the one or more feature vectors.
  20. The device of claim 11, further comprising:
    an obtaining unit for obtaining operating-environment data; and
    a second recognition unit for performing human-machine recognition or verification based on the one or more feature vectors and the operating-environment data.
  21. A computer storage medium comprising instructions which, when executed, cause a processor to perform the method of any one of claims 1 to 10.
PCT/CN2019/094475 2018-07-11 2019-07-03 Feature processing method and device for motion trajectory, and computer storage medium WO2020011069A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US17/258,665 US11222431B2 (en) 2018-07-11 2019-07-03 Feature processing method and device for motion trajectory, and computer storage medium
KR1020207032005A KR102343849B1 (ko) 2018-07-11 2019-07-03 모션 궤적의 특징 처리 방법, 장치 및 컴퓨터 저장 매체
EP19834148.9A EP3822830B1 (en) 2018-07-11 2019-07-03 Feature processing method and device for motion trajectory, and computer storage medium
JP2020554890A JP7096904B2 (ja) 2018-07-11 2019-07-03 運動軌跡の特徴処理方法、装置、および、コンピュータ記憶媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810756730.2 2018-07-11
CN201810756730.2A CN110717154A (zh) Feature processing method and device for motion trajectory, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2020011069A1 (zh)



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364722A (zh) * 2020-10-23 2021-02-12 岭东核电有限公司 Monitoring and processing method and apparatus for nuclear power plant personnel, and computer device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960139A (zh) * 2018-07-03 2018-12-07 百度在线网络技术(北京)有限公司 Person behavior recognition method, apparatus and storage medium
CN111862144A (zh) * 2020-07-01 2020-10-30 睿视智觉(厦门)科技有限公司 Method and apparatus for determining a score for an object's movement trajectory
CN112686941B (zh) * 2020-12-24 2023-09-19 北京英泰智科技股份有限公司 Method, apparatus and electronic device for identifying the plausibility of a vehicle motion trajectory
CN113658225A (zh) * 2021-08-19 2021-11-16 天之翼(苏州)科技有限公司 Moving-object recognition method and system based on aerial surveillance
CN115392407B (zh) * 2022-10-28 2023-03-24 中建五局第三建设有限公司 Hazard-source early-warning method, apparatus, device and medium based on unsupervised learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130057516A1 (en) * 2011-09-07 2013-03-07 Chih-Hung Lu Optical touch-control system with track detecting function and method thereof
CN103093183A (zh) * 2011-10-27 2013-05-08 索尼公司 Classifier generation device and method, video detection device and method, and video surveillance system
US20160259483A1 (en) * 2012-03-28 2016-09-08 Amazon Technologies, Inc. Integrated near field sensor for display devices
CN106569613A (zh) * 2016-11-14 2017-04-19 中国电子科技集团公司第二十八研究所 Multimodal human-computer interaction system and control method therefor
CN107133511A (zh) * 2017-04-28 2017-09-05 成都新橙北斗智联有限公司 Verification method and device for slide verification
CN107463878A (zh) * 2017-07-05 2017-12-12 成都数联铭品科技有限公司 Human behavior recognition system based on deep learning

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5281957A (en) * 1984-11-14 1994-01-25 Schoolman Scientific Corp. Portable computer and head mounted display
US5298919A (en) * 1991-08-02 1994-03-29 Multipoint Technology Corporation Multi-dimensional input device
JP2009020897A (ja) * 2002-09-26 2009-01-29 Toshiba Corp 画像解析方法、画像解析装置、画像解析プログラム
JP2007334467A (ja) * 2006-06-13 2007-12-27 Cyber Sign Japan Inc 手書署名認証方法
US7971543B2 (en) * 2007-03-06 2011-07-05 Brother Kogyo Kabushiki Kaisha Sewing machine and computer-readable recording medium storing sewing machine operation program
WO2009028221A1 (ja) * 2007-08-31 2009-03-05 Tokyo Metropolitan Organization For Medical Research 定量的運動機能評価システム
KR101053411B1 (ko) 2009-02-04 2011-08-01 허성민 문자 입력 방법 및 그 단말기
US9280281B2 (en) * 2012-09-12 2016-03-08 Insyde Software Corp. System and method for providing gesture-based user identification
CN104463084A (zh) 2013-09-24 2015-03-25 江南大学 一种基于非负矩阵分解的离线手写签名识别
JP2015135537A (ja) 2014-01-16 2015-07-27 株式会社リコー 座標検出システム、情報処理装置、座標検出方法およびプログラム
EP2911089B1 (en) 2014-02-25 2018-04-04 Karlsruher Institut für Technologie Method and system for handwriting and gesture recognition
JP6464504B6 (ja) 2014-10-01 2019-03-13 Dynabook株式会社 電子機器、処理方法およびプログラム
DK3158553T3 (en) * 2015-03-31 2019-03-18 Sz Dji Technology Co Ltd Authentication systems and methods for identifying authorized participants
JP2017004298A (ja) 2015-06-11 2017-01-05 シャープ株式会社 文字入力システム、文字入力方法、及び、コンピュータプログラム
KR101585842B1 (ko) 2015-10-05 2016-01-15 주식회사 시큐브 세그먼트 블록 기반 수기서명 인증 시스템 및 방법
CN108121906A (zh) * 2016-11-28 2018-06-05 阿里巴巴集团控股有限公司 一种验证方法、装置以及计算设备
CN107038462B (zh) * 2017-04-14 2020-12-15 广州机智云物联网科技有限公司 设备控制操作方法及系统
CN107507286B (zh) * 2017-08-02 2020-09-29 五邑大学 一种基于人脸和手写签名的双模态生物特征签到系统
CN107679374B (zh) * 2017-08-23 2019-03-15 北京三快在线科技有限公司 一种基于滑动轨迹的人机识别方法及装置,电子设备
CN108229130B (zh) * 2018-01-30 2021-04-16 中国银联股份有限公司 一种验证方法及装置


Also Published As

Publication number Publication date
EP3822830A4 (en) 2022-04-06
US11222431B2 (en) 2022-01-11
US20210248760A1 (en) 2021-08-12
KR20200140869A (ko) 2020-12-16
JP7096904B2 (ja) 2022-07-06
KR102343849B1 (ko) 2021-12-27
JP2021512432A (ja) 2021-05-13
EP3822830A1 (en) 2021-05-19
CN110717154A (zh) 2020-01-21
EP3822830B1 (en) 2023-08-30


Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 19834148; country of ref document: EP; kind code of ref document: A1)
ENP Entry into the national phase (ref document number: 2020554890; country of ref document: JP; kind code of ref document: A)
ENP Entry into the national phase (ref document number: 20207032005; country of ref document: KR; kind code of ref document: A)
NENP Non-entry into the national phase (ref country code: DE)