CN101950352A - Target detection method capable of removing illumination influence and device thereof - Google Patents

Target detection method capable of removing illumination influence and device thereof Download PDF

Info

Publication number
CN101950352A
CN101950352A (application numbers CN2010101951491A / CN201010195149A)
Authority
CN
China
Prior art keywords
gradient
pixel
foreground
background image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010101951491A
Other languages
Chinese (zh)
Other versions
CN101950352B (en)
Inventor
杨学超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netposa Technologies Ltd
Original Assignee
Beijing Zanb Science & Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zanb Science & Technology Co Ltd filed Critical Beijing Zanb Science & Technology Co Ltd
Priority to CN2010101951491A priority Critical patent/CN101950352B/en
Publication of CN101950352A publication Critical patent/CN101950352A/en
Application granted granted Critical
Publication of CN101950352B publication Critical patent/CN101950352B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a target detection method capable of removing illumination influence, comprising the following steps: a background image is created; the gradients of the current frame image and the background image are calculated and output, wherein each gradient comprises a horizontal gradient and a vertical gradient; the directions and amplitudes of the gradients of the current frame image and the background image are compared, and a foreground outline is extracted and output according to the comparison result; and the extracted foreground outline is filled to obtain foreground blobs, and noise is filtered out to output the target. The invention also provides a target detection device capable of removing illumination influence. Compared with the prior art, the target detection method and device of the invention effectively solve the problem that, under the influence of illumination, the detected target is inaccurate and unreliable.

Description

Object detection method and device for removing illumination effects
Technical field
The present invention relates to an object detection method and device, and in particular to an object detection method and device for removing illumination effects, and belongs to the fields of image processing and video surveillance.
Background art
Moving object detection is the basis of intelligent video surveillance technology, and its results directly affect the false alarm rate and missed alarm rate of subsequent event detection (such as intrusion, abandoned objects, stolen objects, or vehicles driving in the wrong direction); it has therefore received wide attention. In practical applications, however, illumination changes occur frequently, which greatly reduces the accuracy and reliability of moving object detection. An object detection method that removes the influence of illumination therefore needs to be studied.
Existing object detection methods for removing illumination effects fall mainly into two classes. The first class is pixel-based. In general an illumination change alters only pixel brightness while color changes little, and methods of this class analyze pixel values in the HSI space to identify illumination-change pixels on the basis of this principle. In many real environments, however, this assumption does not hold, and in most outdoor scenes neither the background nor the target carries much color information, so this class of algorithm performs poorly in practice. The second class is region-based. If the scene has a certain contrast both before and after the illumination change, the change of illumination does not alter the texture and edge features of the image. Region-based methods exploit exactly this principle: if the edges of a foreground region match those of the background, the region is a false foreground caused by the illumination change. However, the assumption that the scene has a certain contrast before and after the illumination change does not hold at night, and the algorithm then fails. Moreover, if a real target enters the illumination-change region at the same time, the matching also fails.
The Chinese patent application with publication number CN 101393603A discloses a method for identifying and detecting tunnel fire flames. That application removes unwanted illumination by means of a gamma transform, but the method is computationally complex and its reliability is limited.
In summary, a simple and effective object detection method that removes illumination effects is urgently needed.
Summary of the invention
The objective of the present invention is to solve the problem that, in object detection, the detected target is inaccurate and unreliable because of the influence of illumination. To achieve this objective, the invention provides an object detection method for removing illumination effects, the method comprising the following steps:
Step 101: establish a background image;
Step 102: calculate and output the gradient of the current frame image and the gradient of the background image, the gradients comprising a horizontal gradient and a vertical gradient;
Step 103: compare the direction and amplitude of the gradient of the current frame image with those of the gradient of the background image, and extract and output a foreground contour accordingly; and
Step 104: fill the extracted foreground contour to obtain foreground blobs, and filter out noise to output the target.
Preferably, in step 101, a statistical method is used to establish the background image. The statistical method performs a statistical analysis of each pixel (x, y) in the images collected over a period of time t (that is, it simply counts the occurrences of each gray value of pixel (x, y)); the stable gray value of pixel (x, y) within this period (the gray value that occurs most often is selected as the stable value) is taken as the gray value of pixel (x, y) in the background image. By accumulating the stable gray value of every pixel in the images collected during the period t, the background image is obtained.
Preferably, in step 102, a gradient operator is used to calculate the horizontal and vertical gradients of the current frame image and the horizontal and vertical gradients of the background image. The gradient operator is a Roberts operator or a Sobel operator; both are common gradient algorithms in image processing and are not described further here.
Preferably, step 103 comprises the following steps:
Step 1031: from the horizontal and vertical gradients of the current frame image and of the background image output in step 102, calculate the gradient amplitude A1 and gradient direction θ1 of each pixel in the current frame image and the gradient amplitude A2 and gradient direction θ2 of each pixel in the background image;
Step 1032: if the gradient amplitude A1 of a pixel (x, y) in the current frame image and the gradient amplitude A2 of the same pixel (x, y) in the background image are both ≥ a first threshold T1, go to step 1033; if A1 and A2 are both ≤ a second threshold T2, regard the pixel (x, y) as a noise point; otherwise calculate |A1−A2|, and if |A1−A2| ≥ a third threshold T3, regard the pixel (x, y) as a foreground point; wherein the first threshold T1 ∈ [8, 12], the second threshold T2 ∈ [3, 5] and the third threshold T3 ∈ [4, 6];
Step 1033: calculate the absolute difference |θ1−θ2| between the gradient direction θ1 of the pixel (x, y) in the current frame image and the gradient direction θ2 of the same pixel (x, y) in the background image; if |θ1−θ2| ≥ a fourth threshold T4, regard the pixel (x, y) as a foreground point; wherein the fourth threshold T4 ∈ [18°, 22°]; and
Step 1034: extract all pixels regarded as foreground points to obtain the foreground contour.
Preferably, step 104 comprises the following steps:
Step 1041: fill the foreground contour output by step 103 to obtain foreground blobs;
Step 1042: calculate the difference image between the current frame image and the background image, and apply threshold segmentation to the difference image to obtain the changed foreground in the difference image; and
Step 1043: perform an AND operation on the foreground blobs and the changed foreground, and take the pixels belonging to both the foreground blobs and the changed foreground as target points to obtain and output the target.
In addition, the present invention also provides an object detection device for removing illumination effects, the device comprising: a background building unit for establishing the background image by a statistical method; a gradient calculation unit for calculating and outputting the gradient of the current frame image and the gradient of the background image, the gradients comprising a horizontal gradient and a vertical gradient; a foreground contour extraction unit for comparing the direction and amplitude of the gradient of the current frame image with those of the gradient of the background image, and extracting and outputting the foreground contour accordingly; and a target acquisition unit for filling the extracted foreground contour to obtain foreground blobs and filtering out noise to output the target.
Compared with the prior art, the object detection method and device of the present invention can detect targets accurately under changing illumination, and effectively solve the problem that the detected target is inaccurate and unreliable because of the influence of illumination.
Description of drawings
Fig. 1 is a flow chart of the object detection method for removing illumination effects according to the present invention;
Fig. 2 is a flow chart of step 103 of the object detection method according to the present invention;
Fig. 3 is a flow chart of step 104 of the object detection method according to the present invention;
Fig. 4 is a block diagram of the object detection device for removing illumination effects according to the present invention.
Detailed description of the embodiments
To enable the examiner to further understand the structure, features and other objectives of the present invention, the invention is described in detail below with reference to the appended preferred embodiments. The illustrated preferred embodiments are intended only to explain the technical solution of the invention and do not limit the invention.
The object detection method for removing illumination effects provided by the present invention is mainly used to solve the problem that object detection in a surveillance scene is inaccurate and unreliable because of the influence of illumination.
As shown in Fig. 1, which is a flow chart of the object detection method for removing illumination effects according to the present invention, the method comprises the following steps:
Step 101: establish a background image;
Step 102: calculate and output the gradient of the current frame image and the gradient of the background image, the gradients comprising a horizontal gradient and a vertical gradient;
Step 103: compare the direction and amplitude of the gradient of the current frame image with those of the gradient of the background image, and extract and output a foreground contour accordingly; and
Step 104: fill the extracted foreground contour to obtain foreground blobs, and filter out noise to output the target.
The background image established in step 101 may be the starting frame image or a specific still image. To ensure the stability and accuracy of the background image, however, a statistical method is preferably used in step 101 to establish the background image. The statistical method is implemented as follows: a statistical analysis is performed on each pixel (x, y) in the images collected over a period of time t (that is, the occurrences of each gray value of pixel (x, y) are simply counted); the stable gray value of pixel (x, y) within this period (the gray value that occurs most often is selected as the stable value) is taken as the gray value of pixel (x, y) in the background image; by accumulating the stable gray value of every pixel in the images collected during the period t, the background image is obtained.
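By way of illustration only (the following sketch is not part of the original disclosure), the per-pixel statistic described above can be written as a short NumPy routine, assuming the collected frames are grayscale arrays of identical size; all function and variable names are illustrative:

    import numpy as np

    def build_background(frames):
        # Estimate the background as the per-pixel mode (most frequent gray
        # value) over the frames collected during the period t.
        stack = np.stack(list(frames), axis=0).astype(np.uint8)   # shape (t, H, W)
        t, h, w = stack.shape
        counts = np.zeros((256, h, w), dtype=np.int32)
        for g in range(256):
            counts[g] = (stack == g).sum(axis=0)    # occurrences of gray value g at each pixel
        # The most frequent (stable) gray value becomes the background value.
        return counts.argmax(axis=0).astype(np.uint8)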
In step 102, a gradient operator may be used to calculate the horizontal and vertical gradients of the current frame image and the horizontal and vertical gradients of the background image. The gradient operator is preferably a Roberts operator or a Sobel operator. For example, a 3×3 Roberts operator may be used to calculate the horizontal and vertical gradients of the current frame image and of the background image.
Calculating the horizontal and vertical gradients of an image with the 3×3 Roberts operator means applying the operator's horizontal-direction template and vertical-direction template to each pixel of the image, i.e. taking the horizontal and vertical differences defined by the two templates. For example, the horizontal-direction template may be chosen as:

    -1  0  1
    -2  0  2
    -1  0  1

and the vertical-direction template as:

    -1 -2 -1
     0  0  0
     1  2  1

The horizontal gradient S_H(x, y) and the vertical gradient S_V(x, y) of a pixel (x, y) are then, respectively:

    S_H(x, y) = [f(x+1, y-1) + 2·f(x+1, y) + f(x+1, y+1)] − [f(x-1, y-1) + 2·f(x-1, y) + f(x-1, y+1)]
    S_V(x, y) = [f(x-1, y+1) + 2·f(x, y+1) + f(x+1, y+1)] − [f(x-1, y-1) + 2·f(x, y-1) + f(x+1, y-1)]

where f(x, y) denotes the gray value of pixel (x, y).
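Purely as an illustration (not part of the original disclosure), the two templates above can be applied with a discrete correlation; the sketch below assumes NumPy and SciPy are available and that images are indexed as image[y, x]:

    import numpy as np
    from scipy.ndimage import correlate

    # 3x3 horizontal- and vertical-direction templates written out from the
    # difference formulas above (rows correspond to y-1, y, y+1 and columns
    # to x-1, x, x+1).
    H_TEMPLATE = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=np.float32)
    V_TEMPLATE = np.array([[-1, -2, -1],
                           [ 0,  0,  0],
                           [ 1,  2,  1]], dtype=np.float32)

    def gradients(image):
        # Apply the templates directly (correlate does not flip the kernel),
        # returning the horizontal gradient S_H and vertical gradient S_V.
        img = image.astype(np.float32)
        s_h = correlate(img, H_TEMPLATE, mode='nearest')
        s_v = correlate(img, V_TEMPLATE, mode='nearest')
        return s_h, s_v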
As shown in Fig. 2, which is a flow chart of step 103 of the object detection method according to the present invention, step 103 may comprise the following steps:
Step 1031: from the horizontal and vertical gradients of the current frame image and of the background image output in step 102, calculate the gradient amplitude A1 and gradient direction θ1 of each pixel in the current frame image and the gradient amplitude A2 and gradient direction θ2 of each pixel in the background image.
The gradient amplitude and gradient direction of a pixel (x, y) in an image are computed as follows:

    A(x, y) = sqrt( S_H(x, y)^2 + S_V(x, y)^2 )
    θ(x, y) = arctan( S_V(x, y) / S_H(x, y) )
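An illustrative NumPy equivalent of the two formulas is given below (arctan2 is used here as a quadrant-aware form of the arctangent; this convention is an assumption, not specified in the patent):

    import numpy as np

    def magnitude_direction(s_h, s_v):
        # Gradient amplitude A and direction theta (in degrees) per the formulas above.
        a = np.hypot(s_h, s_v)
        theta = np.degrees(np.arctan2(s_v, s_h))
        return a, theta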
Step 1032: if the gradient amplitude A1 of a pixel (x, y) in the current frame image and the gradient amplitude A2 of the same pixel (x, y) in the background image are both greater than or equal to a first threshold T1, go to step 1033; if A1 and A2 are both less than or equal to a second threshold T2, regard the pixel (x, y) as a noise point; otherwise calculate |A1−A2|, and if |A1−A2| is greater than or equal to a third threshold T3, regard the pixel (x, y) as a foreground point. Preferably, the first threshold T1 ∈ [8, 12], the second threshold T2 ∈ [3, 5] and the third threshold T3 ∈ [4, 6].
Step 1033: calculate the absolute difference |θ1−θ2| between the gradient direction θ1 of the pixel (x, y) in the current frame image and the gradient direction θ2 of the same pixel (x, y) in the background image; if |θ1−θ2| is greater than or equal to a fourth threshold T4, regard the pixel (x, y) as a foreground point. Preferably, the fourth threshold T4 ∈ [18°, 22°].
Step 1034: extract all pixels regarded as foreground points to obtain the foreground contour.
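Steps 1032 to 1034 amount to a per-pixel decision rule; a vectorised sketch is given below, where the concrete threshold values are merely examples taken from the preferred ranges:

    import numpy as np

    # Example values from the preferred ranges T1∈[8,12], T2∈[3,5], T3∈[4,6], T4∈[18°,22°].
    T1, T2, T3, T4 = 10.0, 4.0, 5.0, 20.0

    def foreground_contour(a1, th1, a2, th2):
        # a1/th1: amplitude and direction in the current frame;
        # a2/th2: amplitude and direction in the background image.
        both_strong = (a1 >= T1) & (a2 >= T1)          # step 1032 -> go to step 1033
        noise = (a1 <= T2) & (a2 <= T2)                # treated as noise points
        # Step 1033: where both gradients are strong, compare directions.
        dir_fg = both_strong & (np.abs(th1 - th2) >= T4)
        # Remaining pixels: compare amplitudes against the third threshold.
        amp_fg = ~both_strong & ~noise & (np.abs(a1 - a2) >= T3)
        # Step 1034: the union of both kinds of foreground points is the contour mask.
        return dir_fg | amp_fg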
As shown in Fig. 3, which is a flow chart of step 104 of the object detection method according to the present invention, step 104 may comprise the following steps:
Step 1041: fill the foreground contour output by step 103 to obtain foreground blobs. There are many ways to fill a contour; for example a horizontal scanning method may be used as follows: taking the bounding rectangle of each foreground contour as the object, start scanning from the first row of the rectangle; in left-to-right order, find the first point (the leftmost contour point) and the last point (the rightmost contour point) of the row, and set all pixels between these two points as foreground points; continue until the row is finished, then scan the next row, until the last row. After scanning, the blob formed by all contour points and foreground points is the foreground blob.
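A simplified sketch of such a row-scan fill is shown below; for brevity it scans the whole mask rather than the bounding rectangle of each individual contour, which is an assumption made only for illustration:

    import numpy as np

    def fill_contours(contour_mask):
        # Horizontal-scan filling (step 1041): in each row, all pixels between
        # the leftmost and the rightmost contour point are set as foreground.
        filled = contour_mask.copy()
        for y in range(contour_mask.shape[0]):
            cols = np.flatnonzero(contour_mask[y])
            if cols.size >= 2:
                filled[y, cols[0]:cols[-1] + 1] = True
        return filled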
Step 1042: calculate the difference image between the current frame image and the background image, and apply threshold segmentation to the difference image to obtain the changed foreground in the difference image.
Thresholding is a method of segmenting the pixels of an image according to a threshold. There are many ways to choose the threshold, including one-dimensional and two-dimensional thresholds. Taking a simple one-dimensional fixed threshold as an example: if the gray value of a point in the difference image is greater than the preset threshold, the point is marked as "1" to denote a foreground point; otherwise it is marked as "0" to denote a background point; a binary foreground image is thus obtained.
Step 1043: perform an AND operation on the foreground blobs and the changed foreground, and take the pixels belonging to both the foreground blobs and the changed foreground as target points to obtain and output the target.
The AND operation is a common computer operation; specifically, if a pixel in the image belongs to both the foreground blobs and the changed foreground, the pixel is regarded as a target point and is obtained and output.
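Steps 1042 and 1043 can be sketched together as follows; the fixed difference threshold is an assumed example value, since the patent does not prescribe one:

    import numpy as np

    DIFF_THRESHOLD = 30   # assumed one-dimensional fixed threshold (illustrative only)

    def detect_target(frame, background, blob_mask):
        # Step 1042: difference image and threshold segmentation ("1" = foreground).
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        changed = diff > DIFF_THRESHOLD
        # Step 1043: AND the filled foreground blobs with the changed foreground;
        # pixels belonging to both masks are output as the target.
        return blob_mask & changed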
As shown in Fig. 4, which is a block diagram of the object detection device for removing illumination effects according to the present invention, the device comprises:
a background building unit 1 for establishing a background image;
a gradient calculation unit 2 for calculating and outputting the gradients of the current frame image and of the background image, the gradients comprising a horizontal gradient and a vertical gradient;
a foreground contour extraction unit 3 for comparing the direction and amplitude of the gradients of the current frame image and of the background image, and extracting and outputting a foreground contour accordingly; and
a target acquisition unit 4 for filling the extracted foreground contour to obtain foreground blobs and filtering out noise to output the target.
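Purely for illustration, the four units could be composed as sketched below, reusing the function sketches given earlier in this description; the class and method names are assumptions rather than terms of the patent:

    class IlluminationRobustDetector:
        # Illustrative composition of the four units of Fig. 4.
        def __init__(self, frames):
            self.background = build_background(frames)       # background building unit 1

        def detect(self, frame):
            s_h_c, s_v_c = gradients(frame)                  # gradient calculation unit 2
            s_h_b, s_v_b = gradients(self.background)
            a1, th1 = magnitude_direction(s_h_c, s_v_c)
            a2, th2 = magnitude_direction(s_h_b, s_v_b)
            contour = foreground_contour(a1, th1, a2, th2)   # foreground contour extraction unit 3
            blobs = fill_contours(contour)                   # target acquisition unit 4
            return detect_target(frame, self.background, blobs)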
Compared with the prior art, the object detection method and device of the present invention effectively solve the problem that, under the influence of illumination, the detected target is inaccurate and unreliable.
It should be stated that the above summary of the invention and the embodiments are intended to demonstrate the practical application of the technical solution provided by the present invention and should not be construed as limiting the scope of protection of the present invention. Those skilled in the art may make various modifications, equivalent replacements or improvements within the spirit and principle of the present invention. The scope of protection of the present invention is defined by the appended claims.

Claims (6)

1. An object detection method for removing illumination effects, characterized in that the object detection method comprises the following steps:
Step 101: establishing a background image by a statistical method;
Step 102: calculating and outputting the gradient of a current frame image and the gradient of the background image, the gradients comprising a horizontal gradient and a vertical gradient;
Step 103: comparing the direction and amplitude of the gradient of the current frame image with those of the gradient of the background image, and extracting and outputting a foreground contour accordingly; and
Step 104: filling the extracted foreground contour to obtain foreground blobs, and filtering out noise to output the target.
2. The object detection method according to claim 1, characterized in that, in step 102, a gradient operator is used to calculate the horizontal and vertical gradients of the current frame image and the horizontal and vertical gradients of the background image.
3. The object detection method according to claim 1, characterized in that step 103 comprises the following steps:
Step 1031: from the horizontal and vertical gradients of the current frame image and of the background image output in step 102, calculating the gradient amplitude A1 and gradient direction θ1 of each pixel in the current frame image and the gradient amplitude A2 and gradient direction θ2 of each pixel in the background image;
Step 1032: if the gradient amplitude A1 of a pixel (x, y) in the current frame image and the gradient amplitude A2 of the same pixel (x, y) in the background image are both ≥ a first threshold T1, going to step 1033; if A1 and A2 are both ≤ a second threshold T2, regarding the pixel (x, y) as a noise point; otherwise calculating |A1−A2|, and if |A1−A2| ≥ a third threshold T3, regarding the pixel (x, y) as a foreground point;
Step 1033: calculating the absolute difference |θ1−θ2| between the gradient direction θ1 of the pixel (x, y) in the current frame image and the gradient direction θ2 of the same pixel (x, y) in the background image, and if |θ1−θ2| ≥ a fourth threshold T4, regarding the pixel (x, y) as a foreground point; and
Step 1034: extracting all pixels regarded as foreground points to obtain the foreground contour.
4. The object detection method according to claim 3, characterized in that the first threshold T1 ∈ [8, 12], the second threshold T2 ∈ [3, 5], the third threshold T3 ∈ [4, 6] and the fourth threshold T4 ∈ [18°, 22°].
5. The object detection method according to claim 1, characterized in that step 104 comprises the following steps:
Step 1041: filling the foreground contour output by step 103 to obtain foreground blobs;
Step 1042: calculating the difference image between the current frame image and the background image, and applying threshold segmentation to the difference image to obtain the changed foreground in the difference image; and
Step 1043: performing an AND operation on the foreground blobs and the changed foreground, and taking the pixels belonging to both the foreground blobs and the changed foreground as target points to obtain and output the target.
6. An object detection device for removing illumination effects, characterized in that the device comprises:
a background building unit for establishing a background image by a statistical method;
a gradient calculation unit for calculating and outputting the gradient of a current frame image and the gradient of the background image, the gradients comprising a horizontal gradient and a vertical gradient;
a foreground contour extraction unit for comparing the direction and amplitude of the gradient of the current frame image with those of the gradient of the background image, and extracting and outputting a foreground contour accordingly; and
a target acquisition unit for filling the extracted foreground contour to obtain foreground blobs and filtering out noise to output the target.
CN2010101951491A 2010-05-31 2010-05-31 Target detection method capable of removing illumination influence and device thereof Active CN101950352B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101951491A CN101950352B (en) 2010-05-31 2010-05-31 Target detection method capable of removing illumination influence and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101951491A CN101950352B (en) 2010-05-31 2010-05-31 Target detection method capable of removing illumination influence and device thereof

Publications (2)

Publication Number Publication Date
CN101950352A true CN101950352A (en) 2011-01-19
CN101950352B CN101950352B (en) 2012-08-22

Family

ID=43453847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101951491A Active CN101950352B (en) 2010-05-31 2010-05-31 Target detection method capable of removing illumination influence and device thereof

Country Status (1)

Country Link
CN (1) CN101950352B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971382A (en) * 2014-05-21 2014-08-06 国家电网公司 Target detection method avoiding light influences

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101425179A (en) * 2008-11-18 2009-05-06 清华大学 Face image relighting method and device
CN101556739A (en) * 2009-05-14 2009-10-14 浙江大学 Vehicle detecting algorithm based on intrinsic image decomposition
CN101621615A (en) * 2009-07-24 2010-01-06 南京邮电大学 Self-adaptive background modeling and moving target detecting method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Journal of Computer Applications (《计算机应用》), Vol. 30, No. 1, 31 January 2010, Wang Tao (王涛) et al., "Object detection method with tracking compensation based on spatio-temporal background difference", abstract, sections 2-3, pages 1-6 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102360513A (en) * 2011-09-30 2012-02-22 北京航空航天大学 Object illumination moving method based on gradient operation
CN102509345A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Portrait art shadow effect generating method based on artist knowledge
CN102360513B (en) * 2011-09-30 2013-02-06 北京航空航天大学 Object illumination moving method based on gradient operation
CN102509345B (en) * 2011-09-30 2014-06-25 北京航空航天大学 Portrait art shadow effect generating method based on artist knowledge
CN103700102A (en) * 2013-12-20 2014-04-02 电子科技大学 Rock core target extracting method based on CT (Computed Tomography) images
CN103871049A (en) * 2014-01-08 2014-06-18 香港应用科技研究院有限公司 Edge detection method under non-uniform background light
CN103871049B (en) * 2014-01-08 2017-01-18 香港应用科技研究院有限公司 Edge detection method under non-uniform background light
GB2563142A (en) * 2017-04-20 2018-12-05 Ford Global Tech Llc Image background subtraction for dynamic lighting scenarios
US10373316B2 (en) 2017-04-20 2019-08-06 Ford Global Technologies, Llc Images background subtraction for dynamic lighting scenarios

Also Published As

Publication number Publication date
CN101950352B (en) 2012-08-22

Similar Documents

Publication Publication Date Title
CN104504388B (en) A kind of pavement crack identification and feature extraction algorithm and system
CN101950352B (en) Target detection method capable of removing illumination influence and device thereof
CN103366156B (en) Road structure detect and track
CN104361353B (en) A kind of application of localization method of area-of-interest in instrument monitoring identification
CN103150549B (en) A kind of road tunnel fire detection method based on the early stage motion feature of smog
CN100587717C (en) Medical large transfusion machine vision on-line detection method
CN104537651B (en) Proportion detecting method and system for cracks in road surface image
CN101739549B (en) Face detection method and system
CN102880863B (en) Method for positioning license number and face of driver on basis of deformable part model
CN104616275A (en) Defect detecting method and defect detecting device
CN111382704A (en) Vehicle line-pressing violation judgment method and device based on deep learning and storage medium
CN106887004A (en) A kind of method for detecting lane lines based on Block- matching
CN102122390A (en) Method for detecting human body based on range image
CN109460787A (en) IDS Framework method for building up, device and data processing equipment
CN109087363B (en) HSV color space-based sewage discharge detection method
CN105678213A (en) Dual-mode masked man event automatic detection method based on video characteristic statistics
CN104537688A (en) Moving object detecting method based on background subtraction and HOG features
CN106327464A (en) Edge detection method
CN103914829B (en) Method for detecting edge of noisy image
CN103473779B (en) The detection method of stripe interference and device in image
CN111310576B (en) Channel target passing detection method, device and equipment
CN115240197A (en) Image quality evaluation method, image quality evaluation device, electronic apparatus, scanning pen, and storage medium
CN104866844B (en) A kind of crowd massing detection method towards monitor video
Lin et al. A new prediction method for edge detection based on human visual feature
CN103413138A (en) Method for detecting point target in infrared image sequence

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: NETPOSA TECHNOLOGIES, LTD.

Free format text: FORMER OWNER: BEIJING ZANB SCIENCE + TECHNOLOGY CO., LTD.

Effective date: 20150716

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150716

Address after: 100102, Beijing, Chaoyang District, Tong Tung Street, No. 1, Wangjing SOHO tower, two, C, 26 floor

Patentee after: NETPOSA TECHNOLOGIES, Ltd.

Address before: 100048 Beijing city Haidian District Road No. 9, building 4, 5 layers of international subject

Patentee before: Beijing ZANB Technology Co.,Ltd.

PP01 Preservation of patent right
PP01 Preservation of patent right

Effective date of registration: 20220726

Granted publication date: 20120822