CN104616302A - Real-time object identification method
- Publication number
- CN104616302A CN104616302A CN201510058136.2A CN201510058136A CN104616302A CN 104616302 A CN104616302 A CN 104616302A CN 201510058136 A CN201510058136 A CN 201510058136A CN 104616302 A CN104616302 A CN 104616302A
- Authority
- CN
- China
- Prior art keywords
- point
- image
- sample
- target
- match
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a real-time object identification method. The method includes: detecting feature points of an image captured by a medical terminal with a corner detection method based on scale space; using a dynamic adaptive random measurement matrix to extract a scale-invariant compressive feature vector for each scale sample, thereby describing multi-scale target feature points; and building a multi-level index structure over the descriptors to complete the fast search for similar descriptors. The method is fast, occupies little memory, achieves high recognition accuracy, and is suitable for visual search on mobile platforms.
Description
Technical field
The present invention relates to image recognition, and in particular to a real-time identification method for medical targets.
Background technology
Image recognition is one of the most active areas of computer vision. Its main research goal is to use the powerful computing capability of computers to help humans automatically process massive amounts of physical information and to recognize targets of various patterns, replacing part of human mental work. Image recognition draws on many disciplines, including computer science and technology, physics, statistics and neurobiology, and is widely applied in fields such as geological prospecting, remote sensing, robot vision and biomedicine. Image recognition first performs feature point detection on the target image, then describes the feature points, and finally determines the most similar image in the sample library corresponding to the target image through feature point matching. Image recognition technology already has many mature applications on personal computers and embedded terminal devices, and with the continuous enhancement of terminal capabilities it is gradually being applied to smart medical terminals as well. However, the weak processing power and small memory of smart medical terminals pose a challenge to computationally complex recognition methods. Compared with servers equipped with high-speed processors and mass storage, smart medical terminals have shortcomings: lower image resolution, no dedicated graphics accelerator, no floating-point unit, and in particular low-end processors and small memories, which limit complex and large-scale computation on the terminal. If a client-server (C/S) model is relied upon, in which the medical terminal only captures the image and displays the result while computation-heavy steps such as feature point extraction and matching are performed on the PC side, with interaction over wireless communication, the system is vulnerable to the wireless network and performs poorly under network congestion.
Summary of the invention
To solve the problems of the prior art described above, the present invention proposes a real-time target identification method, comprising:
detecting the feature points of an image captured by a medical terminal using a scale-space-based corner detection method;
extracting a scale-invariant compressive feature vector for each scale sample with a dynamic adaptive random measurement matrix, thereby describing multi-scale target feature points;
building a multi-level index structure over the descriptors to complete the fast search for similar descriptors.
Preferably, detecting the feature points of the image captured by the medical terminal further comprises:
first searching each layer of the discrete scale space of the image for salient corner points, using pixel intensity as the comparison index to measure the saliency of the point under test, and taking them as candidate feature points, the center point and a neighborhood point being judged similar in intensity when their intensity difference is less than a predefined gray threshold; then applying non-maxima suppression to the candidate feature point set within the neighborhoods of the adjacent upper and lower scale layers, rejecting non-maximal points and retaining the feature points that are unique within their scale-space neighborhood.
Preferably, extracting the scale-invariant compressive feature vector of each scale sample with the dynamic adaptive random measurement matrix to describe multi-scale target feature points further comprises:
if the target size in the initial image is w × h and the scale s of the target is set to 1 in the initial state, the original high-dimensional normalized rectangular feature vector corresponding to the initial image is u = (u1, u2, ..., un)^T with dimension n = (wh)^2; an initial random measurement matrix R0 ∈ R^(m×n) is constructed, where m is the dimension of the compressive feature vector; R0 is determined by the initial target size, is constructed in the initial stage of recognition and remains unchanged during target tracking; only the nonzero elements of the random matrix are stored and computed, each nonzero element comprising the nonzero entry value and the top-left corner coordinates, width and height of the rectangle corresponding to the normalized rectangular feature value of that entry;
for a sample z_s ∈ R^(sw×sh) at scale s, the corresponding original normalized rectangular feature vector is u' = (u'1, u'2, ..., u'n')^T with dimension n' = (s^2·wh)^2; in the vector extraction process, first, according to the sample scale s, R0 is transformed to obtain the random measurement matrix R_s ∈ R^(m×n') at scale s; if a nonzero entry r_(i,j) in R0 corresponds to a rectangular region (c', r', w', h'), where c' and r' are the top-left corner coordinates of the rectangle and w' and h' are the rectangle size, then the corresponding nonzero entry value r^s_(i,j') in the transformed R_s equals r_(i,j) and the corresponding rectangle parameters become (c's, r's, w's, h's); using the transformed random matrix R_s, the m-dimensional compressive feature vector of the target sample at the corresponding scale is obtained as v = R_s·u'; only the normalized rectangular feature values in u' corresponding to nonzero entries are computed via the integral image, so that each normalized rectangular feature value is obtained with only one multiplication and three additions.
Preferably, the method further comprises:
after the descriptor matching for the query image is completed, determining whether a successfully matched sample has been found by judging whether the proportion of similar feature points relative to both the sample image and the query image exceeds a set threshold,
computing the number of matched points between the feature vector set of the query image and every feature vector set in the sample image library, selecting the image with the largest number of matched points and testing the matching rate, the image being a successful match when the matching rate is greater than the set threshold, that is:
if the matching-rate threshold is M_th, the numbers of feature points of the query image A and of the selected sample image B with the largest number of matched points are k1 and k2 respectively, and the number of matched points is m, then if m/k1 > M_th and m/k2 > M_th, images A and B match successfully and sample image B is the corresponding matched target.
Compared with the prior art, the present invention has the following advantages:
The present invention proposes a real-time target identification method suitable for mobile terminals: recognition is fast, memory occupancy is low, and recognition accuracy is high, making the method suitable for visual search on mobile platforms.
Brief description of the drawings
Fig. 1 is a flowchart of the real-time target identification method according to an embodiment of the present invention.
Detailed description of the embodiments
A detailed description of one or more embodiments of the invention is provided below together with the accompanying drawings that illustrate the principles of the invention. The invention is described in conjunction with such embodiments, but it is not limited to any particular embodiment. The scope of the invention is defined only by the claims, and the invention encompasses many alternatives, modifications and equivalents. Many specific details are set forth in the following description to provide a thorough understanding of the invention; these details are provided for exemplary purposes, and the invention may also be practiced according to the claims without some or all of these details.
Fig. 1 is a flowchart of the real-time target identification method according to an embodiment of the present invention. The present invention proposes a recognition method suitable for smart medical terminals: a scale-space-based corner detection method and a SURF-style descriptor are used to cope with the limited processing power and memory of the terminal itself; mismatches between similar features within an image are then reduced by feature corners carrying a gravity direction; finally, a multi-level descriptor index structure is built to complete the fast search for similar descriptors.
Smart medical terminals are developing rapidly: they integrate high-speed processors, mass storage and built-in cameras, and some high-end devices also integrate numerous sensors such as GPS positioning sensors and acceleration sensors. This steadily improving hardware opens new space for computer vision and image processing, and many computer vision methods that originally ran on PCs can, after improvement and optimization, run effectively on the smart medical terminal platform. The feature point detection and feature description of the real-time target identification method are designed with computation and memory in mind respectively, so that the method has a small computational cost and a low memory footprint and is suitable for smart medical terminals.
A feature point is the part of a local image region where the image changes most sharply. The present invention adopts a feature point detection method that is fast and computationally light, so that extraction remains real-time while being suitable for the smart medical terminal platform. The detection method works by searching the image scale space for corner points and applying non-maxima suppression: first, salient corner points are searched in each scale layer of the image as candidate feature points; non-maxima suppression is then applied to the candidate set, rejecting non-maximal points within the neighborhoods of the adjacent upper and lower scale layers; the remaining points are the feature points that are unique within their scale-space neighborhood.
In each layer of the discrete scale space, corner points are searched using pixel intensity as the comparison index to measure the saliency of the point under test. If the intensity difference between the center point and a neighborhood point is less than the predefined gray threshold, the center point is deemed similar in intensity to that neighbor, i.e. it does not satisfy the significantly-brighter or significantly-darker condition. The choice of neighborhood depends on the application platform and requirements; common templates are 5-8, 8-12 and 9-16. With the 9-16 template, the neighborhood of the point under test consists of 16 points in the same layer, and the center point is considered salient, and included as a candidate feature point, only if at least 9 neighborhood points are all brighter or all darker than the center point to be measured.
All corner points found in the scale space are taken as candidate feature points, and non-maxima suppression is then applied to all candidates in the adjacent scale layers. For a pixel P_σ finally confirmed as a feature point, its corresponding points P_(σ-1) and P_(σ+1) in the scale layers below and above should also be salient in their respective layers.
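As a concrete illustration of the detection step above, the following sketch implements the 9-16 saliency test and the cross-scale non-maxima suppression under simplifying assumptions (all scale layers share the same resolution and the saliency score is a plain sum of absolute intensity differences); the function names and the threshold value are illustrative, not taken from the patent.

```python
import numpy as np

# Offsets of a 16-point circular neighborhood (radius-3 circle), as used by the 9-16 template.
CIRCLE16 = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
            (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_salient(gray, x, y, thresh):
    """9-16 saliency test: at least 9 of the 16 circle points must be brighter than the
    center by more than thresh, or at least 9 must be darker by more than thresh."""
    c = int(gray[y, x])
    brighter = darker = 0
    for dx, dy in CIRCLE16:
        p = int(gray[y + dy, x + dx])
        if p - c > thresh:
            brighter += 1
        elif c - p > thresh:
            darker += 1
    return brighter >= 9 or darker >= 9

def saliency_score(gray, x, y):
    """Simple saliency measure: sum of absolute intensity differences to the circle points."""
    c = int(gray[y, x])
    return sum(abs(int(gray[y + dy, x + dx]) - c) for dx, dy in CIRCLE16)

def detect_features(pyramid, thresh=20):
    """pyramid: list of grayscale layers of the discrete scale space (same size here for
    simplicity). Returns (layer, x, y) of candidates that survive non-maxima suppression
    against the 3x3 neighborhoods in their own layer and in the layers above and below."""
    candidates = {}
    for s, gray in enumerate(pyramid):
        h, w = gray.shape
        for y in range(3, h - 3):
            for x in range(3, w - 3):
                if is_salient(gray, x, y, thresh):
                    candidates[(s, x, y)] = saliency_score(gray, x, y)
    keypoints = []
    for (s, x, y), score in candidates.items():
        neighbors = [candidates.get((s2, x2, y2), 0)
                     for s2 in (s - 1, s, s + 1)
                     for x2 in range(x - 1, x + 2)
                     for y2 in range(y - 1, y + 2)
                     if (s2, x2, y2) != (s, x, y)]
        if all(score > n for n in neighbors):   # unique maximum in its scale-space neighborhood
            keypoints.append((s, x, y))
    return keypoints
```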
The numerous sensors in a smart medical terminal provide the conditions for extracting image feature points with greater discriminative power, and using the gravity direction can alleviate the problem of mismatches between similar features to some extent. After alignment with the gravity direction, the differences between normalized local feature regions become more distinct. Adding the gravity direction to the feature point description therefore noticeably reduces mismatches between similar natural features.
The gravity direction in the camera coordinate system is projected onto the image plane according to the camera imaging model to obtain the gravity direction at each feature point. The feature point is aligned along the gravity direction, and the angle between the gravity direction and the principal direction of the pixel intensities in the feature point neighborhood is computed and used as one component of the feature description. When matching feature points that carry a gravity direction, an angle similarity test is performed first: a threshold is set and the absolute value of the angle difference must be smaller than it; the conventional shortest-distance search is then performed. This overcomes, to some extent, the difficulty that conventional feature detection has in distinguishing feature points with similar structure, and it also saves descriptor matching time, so the performance is better than that of conventional methods.
A common smart medical terminal is equipped with sensing devices such as an acceleration sensor, an attitude sensor and a gyroscope. The acceleration sensor provides a gravity direction for each image, represented as a three-dimensional vector g = [g_x, g_y, g_z]^T; g is usually normalized in the camera coordinate system so that ‖g‖ = 1. The gravity vector is projected onto the two-dimensional plane of the image, and the gravity direction at an image pixel point p = [u, v, 1]^T of the projection plane is computed as d = p' - p,

where d = [d_u, d_v, 0]^T is the gravity direction at point p, and p' is computed by:

p' = [w·u', w·v', w]^T = p + K·g

where K is the intrinsic parameter matrix of the terminal camera. d can also be obtained by a computationally cheaper approximation:

d = [g_z(p_u - u) + f_u·g_x, g_z(p_v - v) + f_v·g_y, 0]^T

The angle of the gravity direction at the image pixel is then obtained from d as

θ_g = arctan(d_v / d_u)

where [p_u, p_v]^T is the principal point coordinate and f_u and f_v are the focal lengths in the horizontal and vertical directions.
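The projection above can be sketched as follows, assuming a standard pinhole intrinsic matrix K = [[f_u, 0, p_u], [0, f_v, p_v], [0, 0, 1]]; the function names and the example values are illustrative, not taken from the patent.

```python
import numpy as np

def gravity_angle(g, K, u, v):
    """Project the gravity vector g onto the image plane at pixel (u, v) and return the
    gravity-direction angle theta_g at that pixel (arctan2 is used instead of arctan to
    keep the full quadrant information)."""
    g = np.asarray(g, dtype=float)
    g = g / np.linalg.norm(g)                  # regularize so that ||g|| = 1
    p = np.array([u, v, 1.0])
    q = p + K @ g                              # homogeneous p' = p + K g
    d = q[:2] / q[2] - p[:2]                   # d = p' - p after dehomogenizing
    return np.arctan2(d[1], d[0])

def gravity_angle_approx(g, fu, fv, pu, pv, u, v):
    """Cheaper approximation d ~ [gz*(pu - u) + fu*gx, gz*(pv - v) + fv*gy]; the angle is
    the same, since d only changes by a positive scale factor."""
    gx, gy, gz = g
    return np.arctan2(gz * (pv - v) + fv * gy, gz * (pu - u) + fu * gx)

# Example with assumed camera parameters and accelerometer reading.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
g = [0.1, 0.95, 0.3]
print(gravity_angle(g, K, 100, 150))
print(gravity_angle_approx(g, 800.0, 800.0, 320.0, 240.0, 100, 150))
```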
In yet another embodiment of the method, in order to compute the target representation in real time and to adapt to changes in target scale, a dynamic adaptive random measurement matrix is used to extract a scale-invariant compressive feature vector for each scale sample, so as to describe multi-scale target feature points.
Suppose the target size in the initial image is w × h, and the scale s of the target is defined to be 1 in the initial state. The original high-dimensional normalized rectangular feature vector corresponding to the initial image is u = (u1, u2, ..., un)^T with dimension n = (wh)^2. An initial random measurement matrix R0 ∈ R^(m×n) is constructed, where m is the dimension of the compressive feature vector. R0 is determined by the initial target size, is constructed in the initial stage of recognition, and remains unchanged during tracking. Because R0 is extremely sparse, only the nonzero elements of the random matrix need to be stored and computed (each comprising the nonzero entry value and the top-left corner coordinates, width and height of the rectangle corresponding to the normalized rectangular feature value of that entry), which reduces the time and space complexity of the algorithm.
For a sample z_s ∈ R^(sw×sh) at scale s, the corresponding original normalized rectangular feature vector is u' = (u'1, u'2, ..., u'n')^T with dimension n' = (s^2·wh)^2. First, according to the sample scale s, R0 is transformed to obtain the random measurement matrix R_s ∈ R^(m×n') at scale s. Suppose a nonzero entry r_(i,j) in R0 corresponds to the rectangular region (c', r', w', h'), where c' and r' are the top-left corner coordinates and w' and h' are the rectangle size. Then the corresponding nonzero entry value r^s_(i,j') in the transformed R_s is identical to r_(i,j), but the corresponding rectangle parameters become (c's, r's, w's, h's). Using the transformed random matrix R_s, the m-dimensional compressive feature vector of the target sample at the corresponding scale is obtained as v = R_s·u'. Since R_s is equally sparse, only the normalized rectangular feature values in u' that correspond to nonzero entries need to be computed, using the integral image. With the sparsity parameter of the random matrix set to n/4, each row of the random measurement matrices R0 and R_s contains at most c nonzero entries (c ≤ 4); in an actual implementation the computational complexity of extracting the features of one sample is therefore only O(cm), and the integral image method allows each normalized rectangular feature value to be obtained with only one multiplication and three additions.
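The following sketch illustrates the scale-adaptive compressive feature extraction described above. It is a simplified illustration, not the patent's exact construction: the random matrix is stored row-wise as (value, rectangle) pairs, the +/-1 entry values, the choice of at most c = 4 entries per row and the rectangle-mean normalization are assumptions, and the function names are illustrative.

```python
import numpy as np

def build_sparse_measurement(m, w, h, c=4, rng=np.random.default_rng(0)):
    """Build R0 as m sparse rows; each row holds at most c nonzero entries, each entry
    stored as (value, rect) with rect = (c0, r0, rw, rh) in the initial target's coordinates."""
    rows = []
    for _ in range(m):
        entries = []
        for _ in range(rng.integers(1, c + 1)):
            value = float(rng.choice([1.0, -1.0]))            # +/-1 sparse entries (assumed)
            c0, r0 = int(rng.integers(0, w - 1)), int(rng.integers(0, h - 1))
            rw, rh = int(rng.integers(1, w - c0)), int(rng.integers(1, h - r0))
            entries.append((value, (c0, r0, rw, rh)))
        rows.append(entries)
    return rows

def scale_measurement(R0, s):
    """Transform R0 into R_s: entry values are kept, rectangle parameters are scaled by s."""
    return [[(v, (round(c0 * s), round(r0 * s), max(1, round(rw * s)), max(1, round(rh * s))))
             for v, (c0, r0, rw, rh) in row] for row in R0]

def compressive_features(sample, R_s):
    """m-dimensional compressive feature vector v = R_s u', evaluated only at the nonzero
    entries of R_s, with rectangle sums taken from the sample's integral image."""
    ii = np.pad(np.cumsum(np.cumsum(sample.astype(float), axis=0), axis=1), ((1, 0), (1, 0)))
    h, w = sample.shape
    v = np.zeros(len(R_s))
    for i, row in enumerate(R_s):
        for val, (c0, r0, rw, rh) in row:
            c0, r0 = min(c0, w - 1), min(r0, h - 1)           # clamp the scaled rectangle
            c1, r1 = min(c0 + rw, w), min(r0 + rh, h)
            # normalized rectangular feature: rectangle mean (normalization assumed)
            mean = (ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]) / ((r1 - r0) * (c1 - c0))
            v[i] += val * mean
    return v

# Example: extract features for the initial target and for a sample at scale s = 1.5.
rng = np.random.default_rng(0)
target = rng.random((32, 24))                                  # h x w = 32 x 24
R0 = build_sparse_measurement(m=50, w=24, h=32)
v0 = compressive_features(target, R0)
sample = rng.random((48, 36))                                  # the same target at scale 1.5
v_s = compressive_features(sample, scale_measurement(R0, 1.5))
```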
Optionally, for describing the image feature points, the present invention selects a SURF-style descriptor, which is fast to compute, light on memory and robust. It is a binary descriptor with a fixed sampling pattern: sampling points placed in the neighborhood close to the feature point are denser and use smaller Gaussian kernel parameters, while in the border region far from the feature point the sampling density is lower and the Gaussian kernel parameters are larger. This matches the everyday subjective experience that image content farther from the center of gaze appears blurrier.
The principal direction of the feature is computed from local gradients; the sampling-point pairs used to compute the gradients are simply several groups of point pairs taken relative to the center point.
The gradient of the feature principal direction is

g_o = (1/M) Σ_{(P_i, P_j) ∈ C} (I(P_j, σ_j) - I(P_i, σ_i)) · (P_j - P_i) / ‖P_j - P_i‖

where I(P_j, σ_j) and I(P_i, σ_i) denote the intensities of a pair of sampling points after Gaussian smoothing, C is the set of sampling-point pairs used to compute the gradient, M is the number of point pairs in C, and P_j and P_i are the spatial coordinate vectors of a pair of sampling points.
After the principal direction is obtained, the sampling pattern point set is rotated about the feature point K by the angle θ_g = arctan2(g_y, g_x), and the binary descriptor D is built by concatenating the comparison results T(P_ei, P_ej) over the N sampling-point pairs:

(P_ei, P_ej) is a pair of sampling points in the neighborhood of the feature point K after alignment with the principal direction, and N is the descriptor length. T(P_ei, P_ej) satisfies:

T(P_ei, P_ej) = 1 when I(P_ei) - I(P_ej) > 0,
T(P_ei, P_ej) = 0 when I(P_ei) - I(P_ej) ≤ 0.
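A minimal sketch of the comparison test T(P_ei, P_ej) and the resulting binary descriptor is given below, assuming that the smoothed image and the sampling pattern (a list of paired pixel offsets) are already available; the rotation handling and the function name are illustrative.

```python
import numpy as np

def binary_descriptor(gray, keypoint, pairs, theta):
    """Build the binary descriptor for keypoint (x, y): rotate the fixed sampling pattern
    by theta, then set bit i to 1 iff I(P_ei) - I(P_ej) > 0 for the i-th point pair.
    pairs: list of ((xi, yi), (xj, yj)) offsets of the fixed sampling pattern; its length N
    is the descriptor length (e.g. N = 512)."""
    x, y = keypoint
    c, s = np.cos(theta), np.sin(theta)
    h, w = gray.shape
    bits = []
    for (xi, yi), (xj, yj) in pairs:
        # rotate both pattern offsets around the keypoint
        ui = int(np.clip(round(x + c * xi - s * yi), 0, w - 1))
        vi = int(np.clip(round(y + s * xi + c * yi), 0, h - 1))
        uj = int(np.clip(round(x + c * xj - s * yj), 0, w - 1))
        vj = int(np.clip(round(y + s * xj + c * yj), 0, h - 1))
        bits.append(1 if int(gray[vi, ui]) - int(gray[vj, uj]) > 0 else 0)
    return np.array(bits, dtype=np.uint8)
```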
The feature point description method of the present invention screens the sampling-point pairs by the criterion of maximum variance over the comparison-point population, and the finally determined descriptor length is relatively short.
A conventional descriptor matching process compares the query descriptor with every sample descriptor to determine the similar descriptors. When there are many sample images, the matching time of the conventional method grows linearly, which is unacceptable for an image recognition system with high real-time requirements. The present invention therefore also includes a fast search method for similar descriptors that compares only parts of the descriptors.
First, the 512-bit descriptor is cut into multiple disjoint segment descriptors, each segment being 8 bits long, giving 64 segments in total; a two-level index is then built over the segments. The first-level index is the segment number and the second-level index is the 8-bit binary value of the segment (00000000 to 11111111). Each sample descriptor is first segmented; each segment is placed, according to its segment number and binary index, into the corresponding doubly linked list, and a node containing the descriptor identifier and the identifier of the image it belongs to is created in the list. To save storage space and to make the subsequent AND operation convenient, the present invention stores the descriptors of all sample images in a single array, binds to each image the start and end indices of its descriptors in the array, and inserts the element index in the array into the doubly linked list as the descriptor ID. In this way a descriptor can be located quickly from its index, and the corresponding image can also be located, as sketched below.
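The two-level index can be sketched as follows, with a dictionary of lists standing in for the doubly linked lists and descriptor IDs taken to be positions in the single descriptor array; the cluster-radius insertion (R_th/n) described in the next paragraph is omitted here for brevity, and the names are illustrative.

```python
from collections import defaultdict

SEG_BITS = 8
N_SEGMENTS = 64          # a 512-bit descriptor is cut into 64 disjoint 8-bit segments

def segment(descriptor_bits):
    """Split a 512-element 0/1 descriptor into 64 integer segment values (0..255)."""
    return [int("".join(str(b) for b in descriptor_bits[i * SEG_BITS:(i + 1) * SEG_BITS]), 2)
            for i in range(N_SEGMENTS)]

def build_index(sample_descriptors):
    """First-level index: segment number; second-level index: 8-bit segment value.
    Each bucket stores the IDs (array positions) of the sample descriptors."""
    index = [defaultdict(list) for _ in range(N_SEGMENTS)]
    for desc_id, bits in enumerate(sample_descriptors):
        for seg_no, seg_val in enumerate(segment(bits)):
            index[seg_no][seg_val].append(desc_id)
    return index
```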
Following this idea, the doubly linked lists of the original two-level index are optimized: taking the index value as a cluster center, segment values whose Hamming distance to it is less than or equal to R_th/n are also inserted into the corresponding doubly linked list. When searching for similar descriptors, the query descriptor is first segmented, the doubly linked list corresponding to each segment is located, and the ID sets in the lists are then combined with an AND operation. If the segments are segment 1, segment 2, ..., segment n, the corresponding doubly linked list sets are D1, D2, ..., Dn, and U is the intersection of the ID sets corresponding to the segment descriptors:

U = D1 ∩ D2 ∩ ... ∩ Dn
Finally, the query descriptor is compared one by one with the descriptors whose IDs appear in U, and the sample descriptors satisfying the following condition are taken as similar descriptors:

(|α_i - β_j| < θ_th) and (r_min < R_th)

where α_i and β_j are, respectively, the angles between the gravity direction and the principal intensity direction for the query descriptor and for the sample descriptor, θ_th is the angle threshold, r_min is the smallest distance between a sample descriptor in U and the query descriptor, and R_th is the Hamming distance threshold.
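A sketch of the search step is given below; it reads the AND operation described above as a set intersection of the per-segment candidate lists, reuses segment and build_index from the earlier sketch, and simplifies the r_min test by requiring each reported candidate to be within R_th of the query; the threshold values are placeholders.

```python
def hamming(a, b):
    """Hamming distance between two equal-length 0/1 descriptors."""
    return int(sum(x != y for x, y in zip(a, b)))

def search_similar(query_bits, query_angle, index, sample_descriptors, sample_angles,
                   theta_th=0.3, r_th=60):
    """Look up the bucket of every query segment, intersect the candidate ID sets, then keep
    the candidates that pass the angle test and the Hamming-distance test."""
    candidate_sets = [set(index[seg_no].get(seg_val, []))
                      for seg_no, seg_val in enumerate(segment(query_bits))]
    candidates = set.intersection(*candidate_sets) if candidate_sets else set()
    similar = []
    for desc_id in candidates:
        if (abs(query_angle - sample_angles[desc_id]) < theta_th
                and hamming(query_bits, sample_descriptors[desc_id]) < r_th):
            similar.append(desc_id)
    return similar
```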
After the descriptor matching for the query image is completed, it must be determined whether a successfully matched sample has been found, by judging whether the proportion of similar feature points relative to both the sample image and the query image exceeds a set threshold.

The number of matched points between the feature vector set of the query image A and every feature vector set in the sample image library is computed, the image B with the largest number of matched points is selected, and its matching rate is tested; B is the successfully matched image when the matching rate is greater than the set threshold.

Let the matching-rate threshold be M_th, let the numbers of feature points of A and B be k1 and k2, and let the number of matched points be m. If m/k1 > M_th and m/k2 > M_th, the two images have a high matching degree and the match is successful; this sample image is the corresponding matched target. Different values of R_th and M_th lead to different recognition results; experimental data for determining the best combination of values are given in the experimental section.
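The decision rule can be written as a one-line check; m_th = 0.35 below is a placeholder value, since the patent leaves the best combination of R_th and M_th to the experimental section.

```python
def is_successful_match(m, k1, k2, m_th=0.35):
    """m: number of matched points; k1, k2: feature point counts of the query image A and of
    the sample image B with the most matched points. Both ratios must exceed the threshold."""
    return (m / k1 > m_th) and (m / k2 > m_th)

# Example: 48 matched points, query has 120 feature points, best sample has 110.
print(is_successful_match(48, 120, 110))   # True for m_th = 0.35
```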
In summary, the present invention proposes a real-time target identification method suitable for mobile terminals: recognition is fast, memory occupancy is low, and recognition accuracy is high, making the method suitable for visual search on mobile platforms.
Obviously, those skilled in the art should appreciate that the modules or steps of the present invention described above can be implemented with a general-purpose computing system; they can be concentrated on a single computing system or distributed over a network formed by multiple computing systems, and optionally they can be implemented with program code executable by a computing system, so that they can be stored in a storage system and executed by the computing system. Thus, the present invention is not restricted to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention are intended only to illustrate or explain the principles of the invention and do not limit it. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the invention shall be included within the protection scope of the invention. Furthermore, the appended claims are intended to cover all changes and modifications that fall within the scope and boundary of the claims or the equivalents of such scope and boundary.
Claims (4)
1. A real-time target identification method, characterized by comprising:
detecting the feature points of an image captured by a medical terminal using a scale-space-based corner detection method;
extracting a scale-invariant compressive feature vector for each scale sample with a dynamic adaptive random measurement matrix, thereby describing multi-scale target feature points;
building a multi-level index structure over the descriptors to complete the fast search for similar descriptors.
2. The method according to claim 1, characterized in that detecting the feature points of the image captured by the medical terminal further comprises:
first searching each layer of the discrete scale space of the image for salient corner points, using pixel intensity as the comparison index to measure the saliency of the point under test, and taking them as candidate feature points, the center point and a neighborhood point being judged similar in intensity when their intensity difference is less than a predefined gray threshold; then applying non-maxima suppression to the candidate feature point set within the neighborhoods of the adjacent upper and lower scale layers, rejecting non-maximal points and retaining the feature points that are unique within their scale-space neighborhood.
3. The method according to claim 2, characterized in that extracting the scale-invariant compressive feature vector of each scale sample with the dynamic adaptive random measurement matrix to describe multi-scale target feature points further comprises:
if the target size in the initial image is w × h and the scale s of the target is set to 1 in the initial state, the original high-dimensional normalized rectangular feature vector corresponding to the initial image is u = (u1, u2, ..., un)^T with dimension n = (wh)^2; an initial random measurement matrix R0 ∈ R^(m×n) is constructed, where m is the dimension of the compressive feature vector; R0 is determined by the initial target size, is constructed in the initial stage of recognition and remains unchanged during target tracking; only the nonzero elements of the random matrix are stored and computed, each nonzero element comprising the nonzero entry value and the top-left corner coordinates, width and height of the rectangle corresponding to the normalized rectangular feature value of that entry;
for a sample z_s ∈ R^(sw×sh) at scale s, the corresponding original normalized rectangular feature vector is u' = (u'1, u'2, ..., u'n')^T with dimension n' = (s^2·wh)^2; in the vector extraction process, first, according to the sample scale s, R0 is transformed to obtain the random measurement matrix R_s ∈ R^(m×n') at scale s; if a nonzero entry r_(i,j) in R0 corresponds to a rectangular region (c', r', w', h'), where c' and r' are the top-left corner coordinates of the rectangle and w' and h' are the rectangle size, then the corresponding nonzero entry value r^s_(i,j') in the transformed R_s equals r_(i,j) and the corresponding rectangle parameters become (c's, r's, w's, h's); using the transformed random matrix R_s, the m-dimensional compressive feature vector of the target sample at the corresponding scale is obtained as v = R_s·u'; only the normalized rectangular feature values in u' corresponding to nonzero entries are computed via the integral image, so that each normalized rectangular feature value is obtained with only one multiplication and three additions.
4. The method according to claim 3, characterized in that the method further comprises:
after the descriptor matching for the query image is completed, determining whether a successfully matched sample has been found by judging whether the proportion of similar feature points relative to both the sample image and the query image exceeds a set threshold,
computing the number of matched points between the feature vector set of the query image and every feature vector set in the sample image library, selecting the image with the largest number of matched points and testing the matching rate, the image being a successful match when the matching rate is greater than the set threshold, that is:
if the matching-rate threshold is M_th, the numbers of feature points of the query image A and of the selected sample image B with the largest number of matched points are k1 and k2 respectively, and the number of matched points is m, then if m/k1 > M_th and m/k2 > M_th, images A and B match successfully and sample image B is the corresponding matched target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510058136.2A CN104616302A (en) | 2015-02-04 | 2015-02-04 | Real-time object identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104616302A true CN104616302A (en) | 2015-05-13 |
Family
ID=53150735
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510058136.2A Pending CN104616302A (en) | 2015-02-04 | 2015-02-04 | Real-time object identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104616302A (en) |
-
2015
- 2015-02-04 CN CN201510058136.2A patent/CN104616302A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103065150A (en) * | 2011-10-24 | 2013-04-24 | 康佳集团股份有限公司 | Scene recognition method based on intelligent mobile terminal |
US20150023602A1 (en) * | 2013-07-19 | 2015-01-22 | Kamil Wnuk | Fast recognition algorithm processing, systems and methods |
CN104268602A (en) * | 2014-10-14 | 2015-01-07 | 大连理工大学 | Shielded workpiece identifying method and device based on binary system feature matching |
Non-Patent Citations (2)
Title |
---|
Sun Jiping et al., "Person target matching and tracking method in mine video images (online first)", Journal of China University of Mining & Technology *
Gui Zhenwen et al., "An image recognition algorithm suitable for smartphones", Acta Electronica Sinica *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106483577A (en) * | 2015-09-01 | 2017-03-08 | 中国航天科工集团第四研究院指挥自动化技术研发与应用中心 | A kind of optical detecting gear |
CN105096345A (en) * | 2015-09-15 | 2015-11-25 | 电子科技大学 | Target tracking method based on dynamic measurement matrix and target tracking system based on dynamic measurement matrix |
CN111314708A (en) * | 2020-02-25 | 2020-06-19 | 腾讯科技(深圳)有限公司 | Image data compression method and device, storage medium and electronic equipment |
CN111314708B (en) * | 2020-02-25 | 2021-05-07 | 腾讯科技(深圳)有限公司 | Image data compression method and device, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xie et al. | Multilevel cloud detection in remote sensing images based on deep learning | |
CN110738125B (en) | Method, device and storage medium for selecting detection frame by Mask R-CNN | |
US9349189B2 (en) | Occlusion resistant image template matching using distance transform | |
JP6041923B2 (en) | Information processing method and information processing apparatus | |
US20140169639A1 (en) | Image Detection Method and Device | |
WO2018090937A1 (en) | Image processing method, terminal and storage medium | |
CN105354578B (en) | A kind of multiple target object image matching method | |
GB2532948A (en) | Objection recognition in a 3D scene | |
CN103578093B (en) | Method for registering images, device and augmented reality system | |
CN104063711B (en) | A kind of corridor end point fast algorithm of detecting based on K means methods | |
CN104200240A (en) | Sketch retrieval method based on content adaptive Hash encoding | |
JP2013025799A (en) | Image search method, system, and program | |
CN112163588A (en) | Intelligent evolution-based heterogeneous image target detection method, storage medium and equipment | |
CN110910445B (en) | Object size detection method, device, detection equipment and storage medium | |
Limper et al. | Mesh Saliency Analysis via Local Curvature Entropy. | |
Wang et al. | Combining semantic scene priors and haze removal for single image depth estimation | |
CN106033613B (en) | Method for tracking target and device | |
CN116704324A (en) | Target detection method, system, equipment and storage medium based on underwater image | |
CN104616302A (en) | Real-time object identification method | |
CN107452003A (en) | A kind of method and device of the image segmentation containing depth information | |
CN113496260B (en) | Grain depot personnel non-standard operation detection method based on improved YOLOv3 algorithm | |
CN105631849B (en) | The change detecting method and device of target polygon | |
CN113704276A (en) | Map updating method and device, electronic equipment and computer readable storage medium | |
CN113378837A (en) | License plate shielding identification method and device, electronic equipment and storage medium | |
CN104615988A (en) | Picture identification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20150513 |
|
RJ01 | Rejection of invention patent application after publication |