CN109191496B - Motion prediction method based on shape matching - Google Patents

Motion prediction method based on shape matching

Info

Publication number
CN109191496B
CN109191496B (application CN201810868874.7A)
Authority
CN
China
Prior art keywords
value
image
prediction
convolution sum
template image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810868874.7A
Other languages
Chinese (zh)
Other versions
CN109191496A (en)
Inventor
肖东晋
张立群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alva Beijing Technology Co ltd
Original Assignee
Alva Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alva Beijing Technology Co ltd filed Critical Alva Beijing Technology Co ltd
Priority to CN201810868874.7A priority Critical patent/CN109191496B/en
Publication of CN109191496A publication Critical patent/CN109191496A/en
Application granted granted Critical
Publication of CN109191496B publication Critical patent/CN109191496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

One embodiment of the present invention provides a motion prediction method based on shape matching, including: generating a shape description of the template image; generating a decision value description of the target image; and scanning in the decision value description of the target image using the shape description of the template image, and selecting the direction of the maximum sum in the shape matching results as the target motion prediction range.

Description

Motion prediction method based on shape matching
Technical Field
The invention relates to the field of computers, in particular to a motion prediction method based on shape matching.
Background
Target tracking and prediction is one of the most active research directions in computer vision in recent years. It covers detecting, classifying, identifying and tracking targets in image sequences, as well as understanding and describing their behavior, and belongs to the field of image analysis and understanding. From a technical point of view, target tracking research is quite rich, drawing mainly on pattern recognition, image processing, computer vision, artificial intelligence and other disciplines. At the same time, problems such as fast segmentation of motion in dynamic scenes, non-rigid target motion, and handling of self-occlusion and mutual occlusion between targets pose challenges to target tracking research. Classified by the features extracted from the tracked target, target tracking algorithms mainly include region-based tracking algorithms, feature-based tracking algorithms, active-contour-based tracking algorithms, and Mean-shift-based tracking algorithms.
Target tracking algorithms can be classified by different standards, and each algorithm has its own basic idea, advantages, disadvantages and applicable scenarios. Research on combining the complementary strengths and weaknesses of different tracking algorithms has therefore become a hotspot of current research, and many researchers at home and abroad apply knowledge from multiple disciplines to the study of moving-target tracking algorithms.
Traditional target tracking algorithms have matured into a series of well-studied methods through years of research; their tracking effect and speed on non-high-frequency images are good, and development has mainly focused on improving tracking speed and on robustness to external conditions such as illumination change, occlusion, non-rigid motion and background clutter. However, for images dominated by high-frequency, rapidly changing content such as text, recognition rates are low and tracking performance is poor; little research exists and no suitable tracking prediction method is available, so application requirements in this area cannot be met.
Disclosure of Invention
To solve the problems in the prior art, an embodiment of the present invention provides a motion prediction method based on shape matching, including:
generating a shape description of the template image;
generating a decision value description of the target image;
and scanning in the decision value description of the target image using the shape description of the template image, and selecting the direction of the maximum sum in the shape matching results as the target motion prediction range.
In one embodiment of the invention, generating the shape description of the template image comprises:
generating a Hessian matrix for each pixel point of the template image;
generating a decision value for each pixel based on the Hessian matrix of each pixel of the template image;
and sampling and marking the decision values of the template image according to a certain step length, the sampling points forming the shape description of the whole template image.
In one embodiment of the invention, the template image is Gaussian filtered before the Hessian matrices are generated.
In one embodiment of the present invention, the decision value of each pixel is the determinant of its Hessian matrix, an eigenvalue of the Hessian matrix, or the discriminant of the Hessian matrix.
In one embodiment of the present invention, sampling and marking the decision values of the template image according to a certain step length, and forming the shape description of the whole template image from the sampling points, comprises:
storing the information of a sampling point when its absolute value is greater than the threshold, wherein a set of adjacent points with the same sign forms a shape description.
In one embodiment of the present invention, the Hessian decision values of the target image are calculated in the same manner as the Hessian decision values of the template image.
In one embodiment of the present invention, scanning in the decision value description of the target image using the shape description of the template image includes:
determining an initial prediction center point in the decision value description of the target image;
and calculating, at positions offset by T from the center point of the decision value description of the target image in four directions, the convolution sum of the shape description of the template image and the decision values of the target image in each of the four directions, the direction with the maximum convolution sum being the predicted motion range.
In one embodiment of the present invention, scanning in the decision value description of the target image using the shape description of the template image further comprises:
taking the position of the maximum convolution sum determined by the previous prediction as the current prediction center point, and calculating, at positions offset by T from the current center point in four directions, the convolution sum of the shape description of the template image and the decision values of the target image in each of the four directions;
judging whether the maximum convolution sum calculated currently is larger than the maximum convolution sum of the previous prediction;
if the maximum convolution sum calculated currently is larger than that of the previous prediction, making the current prediction the previous prediction and repeating: taking the position of the maximum convolution sum determined by the previous prediction as the current prediction center point, calculating the convolution sums in the four directions at offset T, and judging whether the current maximum convolution sum is larger than that of the previous prediction;
and when the maximum convolution sum calculated currently is smaller than that of the previous prediction, the position of the previous prediction's maximum convolution sum is the final position of the image prediction.
In one embodiment of the invention, the offset T is less than 100 pixels.
The motion prediction method based on the shape matching can realize large-range prediction with small calculation cost.
Drawings
To further clarify the above and other advantages and features of embodiments of the present invention, a more particular description of embodiments of the invention will be rendered by reference to the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. In the drawings, the same or corresponding parts will be denoted by the same or similar reference numerals for clarity.
Fig. 1 illustrates a flowchart of an image motion prediction method according to an embodiment of the present invention.
FIG. 2 illustrates an exemplary diagram of a shape description of a template image according to one embodiment of the invention.
Fig. 3 shows a schematic diagram of a process for object motion prediction according to an embodiment of the invention.
Detailed Description
In the following description, the invention is described with reference to various embodiments. One skilled in the relevant art will recognize, however, that the embodiments may be practiced without one or more of the specific details, or with other alternative and/or additional methods, materials, or components. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of embodiments of the invention. Similarly, for purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the embodiments of the invention. However, the invention may be practiced without specific details. Further, it should be understood that the embodiments shown in the figures are illustrative representations and are not necessarily drawn to scale.
Reference in the specification to "one embodiment" or "the embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
The invention provides a new method for large-range prediction on high-frequency images, realizing motion prediction through shape matching. In typical tracking algorithms, the Hessian matrix is used to select maxima or minima as feature points for prediction and tracking. In contrast, the present method samples the Hessian determinant at a certain step length to generate a shape, performs the computation with a shape matching method, and finally selects the direction of the maximum value as the prediction range, thereby realizing large-range prediction at low computational cost.
The invention processes images dominated by high-frequency content such as text through the Hessian matrix to remove the high frequencies, then performs shape matching and prediction, realizing large-range prediction on high-frequency images and overcoming the shortcomings of existing matching and tracking methods with respect to high-frequency interference.
The high-frequency information may include text. Note that in the present invention the frequency of an image is an index of the intensity of gray-scale change in the image, i.e. the gradient of the gray scale in the plane; a high-frequency image is thus one characterized by sharp gray-scale changes, large gray-scale gradients in the plane, and/or sharp edges. Depending on the application scene, the high-frequency information may be text, a high-speed moving scene, a high-speed moving object, etc.
Fig. 1 illustrates a flowchart of an image motion prediction method according to an embodiment of the present invention.
First, in step 110, a shape description of the template image is generated. In an embodiment of the present invention, the shape description of the template image may be generated by a Hessian matrix.
The Hessian matrix is a square matrix formed by second-order partial derivatives of a multivariate function and describes the local curvature of the function. For each pixel point, the Hessian matrix is as follows:
$$H = \begin{bmatrix} \dfrac{\partial^2 I}{\partial x^2} & \dfrac{\partial^2 I}{\partial x\,\partial y} \\ \dfrac{\partial^2 I}{\partial x\,\partial y} & \dfrac{\partial^2 I}{\partial y^2} \end{bmatrix}$$

where $I(x, y)$ denotes the image intensity.
Before constructing the Hessian matrix, Gaussian filtering can be applied to the image to remove pixel mutations caused by noise; the filtered Hessian matrix can be expressed as:

$$H(x, \sigma) = \begin{bmatrix} L_{xx}(x, \sigma) & L_{xy}(x, \sigma) \\ L_{xy}(x, \sigma) & L_{yy}(x, \sigma) \end{bmatrix}$$

where $L_{xx}(x, \sigma)$ is the convolution of the Gaussian second-order derivative $\frac{\partial^2}{\partial x^2} g(\sigma)$ with the image at point $x$, and similarly for $L_{xy}(x, \sigma)$ and $L_{yy}(x, \sigma)$.
and generating a judgment value of each pixel based on the Hessian matrix of each pixel of the template image, wherein the judgment value can be a determinant of the Hessian matrix, an eigenvalue of the Hessian matrix or a discriminant of the Hessian matrix, and the like.
The decision values of the template image are then sampled and marked according to a certain step length, and the sampling points form the shape description of the whole template image.
When the absolute value of a sampling point of the template image is greater than the threshold, the point's value and coordinates are stored; since the values are signed, a set of adjacent points with the same sign forms a shape description, i.e. the shape description is formed according to light-dark information. FIG. 2 illustrates an exemplary diagram of a shape description of a template image according to one embodiment of the invention. As shown in fig. 2, the template map 210 has the same size as the original template image; when the absolute value of a sampling point is greater than the threshold, the information of the sampling point is stored, and the set of adjacent points with the same sign forms the shape description 220. The values of the remaining points in the template map 210 are discarded; in other words, they may be regarded as 0.
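The sampling-and-thresholding step above can be sketched as follows; the values of `step` and `threshold` are hypothetical, since the text only specifies "a certain step length" and a threshold:

```python
import numpy as np

def shape_description(decision_map, step=4, threshold=1.0):
    """Sample the decision-value map on a regular grid and keep strong points.

    Returns a list of (row, col, value) sampling points whose absolute
    value exceeds the threshold; adjacent points sharing a sign form the
    shape description, and all other points are treated as 0.
    """
    points = []
    rows, cols = decision_map.shape
    for r in range(0, rows, step):
        for c in range(0, cols, step):
            v = decision_map[r, c]
            if abs(v) > threshold:
                points.append((r, c, float(v)))
    return points
```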
In step 120, a Hessian decision value is generated for each pixel of the target image, and these per-pixel Hessian decision values constitute the decision value description of the target image. In the embodiment of the invention, the Hessian decision values of the target image are calculated in the same manner as those of the template image; the decision value may be the determinant of the Hessian matrix of each pixel of the target image, an eigenvalue of the Hessian matrix, the discriminant of the Hessian matrix, etc.
In step 130, the shape description of the template image is used to scan in the decision value description of the target image, and the direction of the maximum sum in the shape matching results is selected as the target motion prediction range.
In the embodiment of the invention, to simplify the scanning computation, a point in the decision value description of the target image is taken as a center point; the center point of the initial prediction can be the center of the decision value description of the target image. The center of the template image is placed at positions offset by T in each of four directions, the convolution sum of the shape description of the template image and the decision values of the target image is calculated for each direction, the direction with the maximum sum is taken as the predicted motion range, and the next prediction is carried out from the current predicted position. Iteration continues in this way until the maximum convolution sum of the current step is smaller than that of the previous step, at which point the previous position is the final predicted position for the frame.
The convolution calculation can be regarded as a weighted summation: each pixel in an image region is multiplied by the corresponding element of the convolution kernel (i.e., the weight matrix), and the sum of all the products is taken as the new value of the pixel at the center of the region.
As an example, consider the convolution of a 3 × 3 pixel region R with a convolution kernel G:

$$R = \begin{bmatrix} R_1 & R_2 & R_3 \\ R_4 & R_5 & R_6 \\ R_7 & R_8 & R_9 \end{bmatrix}, \qquad G = \begin{bmatrix} G_1 & G_2 & G_3 \\ G_4 & G_5 & G_6 \\ G_7 & G_8 & G_9 \end{bmatrix}$$

convolution sum $= R_1 G_1 + R_2 G_2 + R_3 G_3 + R_4 G_4 + R_5 G_5 + R_6 G_6 + R_7 G_7 + R_8 G_8 + R_9 G_9$.
In the convolution sum between the shape description of the template image and the decision values of the target image, the shape description of the template image can be regarded as a sparse convolution kernel: only the products of the stored sampling points of the shape description with the decision values at the corresponding coordinates of the target image are summed.
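Treating the shape description as a sparse kernel might look like the following sketch, assuming the description is stored as a list of `(row, col, value)` sampling points; skipping out-of-bounds samples is an assumption, since the text does not specify border handling:

```python
import numpy as np  # the target decision-value map is assumed to be a 2-D array

def sparse_convolution_sum(points, target_map, offset_r, offset_c):
    """Convolution sum of the sparse template samples with the target
    decision values, with the template shifted by (offset_r, offset_c).

    Only stored sampling points contribute; samples falling outside the
    target map contribute nothing.
    """
    rows, cols = target_map.shape
    total = 0.0
    for r, c, v in points:
        tr, tc = r + offset_r, c + offset_c
        if 0 <= tr < rows and 0 <= tc < cols:
            total += v * target_map[tr, tc]
    return total
```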
In predicting a single frame, since the offset T is small, for example less than 100 pixels, the maximum sum generally cannot be reached in a single prediction step; the maximum sum for the frame is therefore found through multiple iterative predictions.
In other embodiments of the invention, different scanning strategies may be employed. For example, the initial point of the scan calculation is first determined, and then the scan is performed pixel by pixel. Alternatively, a larger range is predetermined with a larger scan offset and then the scan range is gradually reduced.
Fig. 3 shows a schematic diagram of a process for target motion prediction according to an embodiment of the invention. In fig. 3, P1 is a point in the decision value description of the target image serving as the center point of the initial prediction, and P2 is the initial predicted range and the center point of the next prediction. Part A of fig. 3 shows the overlap of the decision value description of the target image with the template image. Part B shows the offsets of the first prediction step: the images marked by dashed boxes are all template images (including shape descriptions, not marked in the figure); x1, x2, x3 and x4 mark the four directions centered at x0 with offset T and are the center points of the corresponding template images. Part C shows the offsets of the second prediction step: x5, x6, x7 and x8 mark the four directions centered at x3 with offset T and are the center points of the corresponding template images.
As shown in fig. 3, with x0 as the center point, if among the four directions x1, x2, x3 and x4 the sum in the direction of x3 is the largest, the direction of x3 is the predicted range of this step, and the next prediction continues from the position of x3, calculating the convolution sums in the four directions x5, x6, x7 and x8 at offset T. This process is repeated until the maximum convolution sum of the current step is smaller than that of the previous step, at which point the previous position is the final predicted position of the frame.
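The iterative four-direction prediction above can be sketched as a greedy search; the helper `_conv_sum` restates the sparse convolution sum so the sketch is self-contained, and the step `t` is an assumed offset under the 100-pixel bound:

```python
import numpy as np  # the target decision-value map is assumed to be a 2-D array

def _conv_sum(points, target_map, centre_r, centre_c):
    # Sparse convolution sum with the template centred at (centre_r, centre_c);
    # points is a list of (row, col, value) template samples.
    rows, cols = target_map.shape
    total = 0.0
    for r, c, v in points:
        tr, tc = centre_r + r, centre_c + c
        if 0 <= tr < rows and 0 <= tc < cols:
            total += v * target_map[tr, tc]
    return total

def predict_motion(points, target_map, start_rc, t=8):
    """Greedy four-direction search: from the current centre, evaluate the
    convolution sum at offsets of t pixels up, down, left and right, and
    move to the best direction while the score keeps improving."""
    centre = start_rc
    best = _conv_sum(points, target_map, *centre)
    while True:
        r, c = centre
        candidates = [(r - t, c), (r + t, c), (r, c - t), (r, c + t)]
        scores = [_conv_sum(points, target_map, cr, cc) for cr, cc in candidates]
        i = max(range(4), key=lambda k: scores[k])
        if scores[i] <= best:
            # No direction improves on the previous prediction: stop here.
            return centre, best
        centre, best = candidates[i], scores[i]
```

For example, with a single positive template sample, the search climbs toward the maximum of the target's decision-value map in steps of `t` pixels.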
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various combinations, modifications, and changes can be made thereto without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention disclosed herein should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (6)

1. A method of motion prediction based on shape matching, comprising:
generating a shape description of a template image, comprising: generating a Hessian matrix for each pixel point of the template image; generating a decision value for each pixel based on the Hessian matrix of each pixel of the template image; sampling and marking the decision values of the template image according to a certain step length, the sampling points forming the shape description of the whole template image, wherein when the absolute value of a sampling point is greater than a threshold, the information of the sampling point is stored, and a set of adjacent points with the same sign forms the shape description;
generating a decision value description of a target image, comprising: generating a Hessian decision value for each pixel of the target image, the per-pixel Hessian decision values constituting the decision value description of the target image;
scanning in the decision value description of the target image using the shape description of the template image, and selecting the direction of the maximum sum in the shape matching results as a target motion prediction range, comprising: determining an initial prediction center point in the decision value description of the target image; and calculating, at positions offset by T from the center point of the decision value description of the target image in four directions, the convolution sum of the shape description of the template image and the decision values of the target image in each of the four directions, the direction with the maximum convolution sum being the predicted motion range.
2. The shape-matching-based motion prediction method of claim 1, wherein the template image is Gaussian filtered before the Hessian matrices are generated.
3. The shape-matching-based motion prediction method of claim 1, wherein the decision value of each pixel is the determinant of its Hessian matrix, an eigenvalue of the Hessian matrix, or the discriminant of the Hessian matrix.
4. The shape-matching-based motion prediction method of claim 1, wherein the Hessian decision values of the target image are calculated in the same manner as the Hessian decision values of the template image.
5. The shape-matching-based motion prediction method of claim 1, wherein scanning in the decision value description of the target image using the shape description of the template image further comprises:
taking the position of the maximum convolution sum determined by the previous prediction as the current prediction center point, and calculating, at positions offset by T from the current center point in four directions, the convolution sum of the shape description of the template image and the decision values of the target image in each of the four directions;
judging whether the maximum convolution sum calculated currently is larger than the maximum convolution sum of the previous prediction;
if the maximum convolution sum calculated currently is larger than that of the previous prediction, making the current prediction the previous prediction and repeating: taking the position of the maximum convolution sum determined by the previous prediction as the current prediction center point, calculating the convolution sums in the four directions at offset T, and judging whether the current maximum convolution sum is larger than that of the previous prediction;
and if the maximum convolution sum calculated currently is smaller than that of the previous prediction, the position of the previous prediction's maximum convolution sum is the final position of the image prediction.
6. The method of shape matching based motion prediction according to claim 1, wherein the offset T is less than 100 pixels.
CN201810868874.7A 2018-08-02 2018-08-02 Motion prediction method based on shape matching Active CN109191496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810868874.7A CN109191496B (en) 2018-08-02 2018-08-02 Motion prediction method based on shape matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810868874.7A CN109191496B (en) 2018-08-02 2018-08-02 Motion prediction method based on shape matching

Publications (2)

Publication Number Publication Date
CN109191496A CN109191496A (en) 2019-01-11
CN109191496B true CN109191496B (en) 2020-10-02

Family

ID=64920438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810868874.7A Active CN109191496B (en) 2018-08-02 2018-08-02 Motion prediction method based on shape matching

Country Status (1)

Country Link
CN (1) CN109191496B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101169693B * 2007-11-30 2010-06-09 Apexone Microelectronics Co., Ltd. Optical movement sensing method
CN101996325A * 2010-09-08 2011-03-30 Beihang University Improved method for extracting characteristic point from image
CN102473307A * 2010-03-15 2012-05-23 Panasonic Corporation Method and apparatus for trajectory estimation, and method for segmentation
CN102598057A * 2009-08-23 2012-07-18 IAD Gesellschaft für Informatik, Automatisierung und Datenverarbeitung mbH Method and system for automatic object detection and subsequent object tracking in accordance with the object shape
CN102750708A * 2012-05-11 2012-10-24 Tianjin University Affine motion target tracking algorithm based on fast robust feature matching
CN103824076A * 2014-02-28 2014-05-28 Soochow University Scale-invariant image feature detection and extraction method and system
CN106056664A * 2016-05-23 2016-10-26 Wuhan Yingli Technology Co., Ltd. Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision
CN106463060A * 2014-05-19 2017-02-22 Ricoh Co., Ltd. Processing apparatus, processing system, processing program, and processing method
CN107564035A * 2017-07-31 2018-01-09 South China Agricultural University Video tracking method based on salient region identification and matching

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2507558A (en) * 2012-11-05 2014-05-07 Toshiba Res Europ Ltd Image processing with similarity measure of two image patches
KR101783990B1 * 2012-12-21 2017-10-10 Hanwha Techwin Co., Ltd. Digital image processing apparatus and method for estimating global motion of image


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Direct Techniques for Optimal Sub-Pixel Motion Accuracy Estimation and Position Prediction; Qi Zhang et al.; IEEE Transactions on Circuits and Systems for Video Technology; 2010; vol. 20, no. 12; 1735-1744 *
Hierarchical active shape model with motion prediction for real-time tracking of non-rigid objects; S.-W. Lee et al.; The Institution of Engineering and Technology; 2007; vol. 1, no. 1; 17-24 *
Fast motion estimation algorithm based on motion direction prediction; Xiang Youjun et al.; Computer Engineering; 2009; vol. 35, no. 24; 20-22 *
Planetary surface feature tracking method based on robust curve matching; Shao Wei et al.; Journal of Deep Space Exploration; 2014; vol. 1, no. 1; 75-80 *

Also Published As

Publication number Publication date
CN109191496A (en) 2019-01-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
Denomination of invention: A motion prediction method based on shape matching
Effective date of registration: 20210825
Granted publication date: 20201002
Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee
Pledgor: ALVA SYSTEMS
Registration number: Y2021990000769
PC01 Cancellation of the registration of the contract for pledge of patent right
Date of cancellation: 20230508
Granted publication date: 20201002
Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee
Pledgor: ALVA SYSTEMS
Registration number: Y2021990000769