CN103714325B - Left object and lost object real-time detection method based on embedded system - Google Patents

Left object and lost object real-time detection method based on embedded system

Info

Publication number
CN103714325B
CN103714325B CN201310745099.3A
Authority
CN
China
Prior art keywords
target
binary map
connected domain
static
lost
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310745099.3A
Other languages
Chinese (zh)
Other versions
CN103714325A (en)
Inventor
胡斌
王飞跃
熊刚
李逸岳
陈鹏
蒋剑
田秋常
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Cloud Computing Center of CAS
Original Assignee
Institute of Automation of Chinese Academy of Science
Cloud Computing Center of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science, Cloud Computing Center of CAS filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201310745099.3A priority Critical patent/CN103714325B/en
Publication of CN103714325A publication Critical patent/CN103714325A/en
Application granted granted Critical
Publication of CN103714325B publication Critical patent/CN103714325B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a real-time detection method for left-behind objects and lost objects based on an embedded system. The method comprises the following steps: acquiring video data; learning the background of each image to be detected with long-period and short-period Gaussian mixture models, extracting suspicious static target regions, and obtaining a target foreground binary map; accumulating the dwell time of each suspicious static target region according to the target foreground binary map, triggering an alarm if the dwell time exceeds a preset threshold within the preset alarm period, and marking the size and position of the suspicious static object on the original image; determining whether a suspicious static target region contains a detection target by analysing the jitter of the region's edges, and obtaining the final rectangular target region; and, within the rectangular target regions that contain a detection target, distinguishing left-behind objects from lost objects and raising the corresponding alarm. The method has low computational overhead, can run on an embedded SoC front end, and meets the requirements of real-time detection.

Description

Real-time detection method for left-behind objects and lost objects based on an embedded system
Technical field
The present invention relates to the application of image processing and intelligent video analysis in the security and intelligent-transportation fields, and in particular to a real-time detection method for left-behind objects and lost objects based on an embedded system.
Background art
With social and economic development, the demand for security precautions keeps growing. Crowded, important places such as airports, banks, subways and waiting halls are easily exploited by terrorists and generally carry significant safety risks, so they need to be monitored in real time. Traditional surveillance systems, however, are essentially used only for after-the-fact evidence review and cannot detect abnormal events as they happen. Real-time detection of left-behind and lost objects helps monitoring personnel supervise whether a suspicious object has been left in the scene and whether an important object in the scene has been moved or removed, providing active monitoring and early-warning capability.
In current left-behind object detection schemes, real-time pictures are mostly captured by front-end camera equipment and transmitted over the network to a back-end server platform for analysis, and the real-time pictures are output again after the analysis finishes. Video surveillance systems built around background intelligent-analysis servers and monitoring platforms are complex and costly, and with the spread of 720p and 1080p high-definition network cameras the latency of network transmission has to be faced. As the software and hardware capabilities of embedded systems keep improving, front-end intelligent analysis based on embedded systems is showing its advantages: integrating the intelligent analysis into the front end can satisfy the requirement of real-time detection and greatly relieves the processing pressure on the back end, so that the range of applicable scenarios becomes much wider.
Existing detection techniques simply treat regions where the scene has changed as suspicious left-behind targets and lack an analysis stage that strictly distinguishes left-behind objects from lost objects. A left-behind object is a stationary object left by a person and no longer attended; a lost object is a stationary object in the scene that has been moved or removed. Current detection methods mainly suffer from two problems. First, the algorithms are designed for the server side; their memory footprint and CPU load are too large to be ported to an embedded platform for real-time detection, while embedded platforms place stricter requirements on the intelligent-analysis method. Second, the performance of current detection methods is still insufficient: most of them are only stable for indoor scenes with little change and cannot achieve round-the-clock detection in complex outdoor scenes; in particular, illumination changes and lingering pedestrians and vehicles easily cause false detections and false alarms. These problems still need to be solved.
Summary of the invention
To solve the above technical problems and to fill the gap in real-time detection of left-behind and lost objects on front-end embedded systems, the present invention proposes an effective real-time detection method. The method can be integrated into the front end of an intelligent high-definition network camera, offers high real-time performance, accuracy and robustness, is applicable to all kinds of places requiring automatic detection of left-behind and lost objects, and achieves good detection results.
The left-behind object and lost object detection method proposed by the present invention comprises the following steps:
Step 1: acquire video data;
Step 2: learn the background of each image to be detected based on long-period and short-period Gaussian mixture models, extract the target foreground, i.e. the suspicious static target regions, and obtain the target foreground binary map;
Step 3: accumulate the dwell time of each suspicious static target region according to the target foreground binary map; within the preset alarm period, if the dwell time t(x, y) exceeds a predetermined threshold th_max, trigger an alarm, mark the size and position of the suspicious static object on the original image, and output the result for display;
Step 4: determine whether a region contains a detection target by analysing the jitter of the edges of the suspicious static target region, and obtain the final rectangular target region;
Step 5: distinguish left-behind objects from lost objects within the rectangular target regions that contain a detection target, and raise the corresponding alarm.
The left-behind object and lost object detection method based on a front-end embedded system has the following beneficial effects:
(1) All functions, including video capture, real-time detection and alarm triggering, are implemented on the embedded front-end device, which greatly reduces the load on back-end PC servers and facilitates the integration and extension of large-scale surveillance systems.
(2) Foreground regions are analysed and classified, and foreground regions that do not match the characteristics of a static target (mainly lingering pedestrians) are filtered out, which lowers the false-detection rate and enables left-behind objects to be distinguished from lost objects.
(3) The invention takes abrupt changes of scene illumination into account and handles them accordingly, which improves the stability of the detection.
Brief description of the drawings
Fig. 1 is a flow chart of the left-behind object and lost object detection method based on an embedded system according to the present invention;
Fig. 2 shows the detection result for a lost object;
Fig. 3 shows the detection result for a left-behind object.
Detailed description of embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 is a flow chart of the left-behind object and lost object detection method based on an embedded system according to the present invention. As shown in Fig. 1, the method comprises the following steps:
Step 1: acquire video data.
This step uses the DMA direct memory read/write supported by the high-performance embedded SoC hardware architecture to access data directly from the video data memory pool and takes each ISP-optimised image frame out for real-time processing, which relieves the processor's computing load and greatly improves CPU efficiency.
Step 2: learn the background of each image to be detected based on long-period and short-period Gaussian mixture models, extract the target foreground, i.e. the suspicious static target regions, and obtain the target foreground binary map.
In an embodiment of the present invention, this step extracts the target foreground with a mixed-Gaussian foreground detection method based on the YUV colour space.
The YUV colour space is the most common colour representation for video images. Compared with the RGB colour space it removes the influence of shadows more effectively and improves the separability of background and foreground, and the front-end device does not need to perform a colour-space conversion, which greatly reduces the amount of computation and conversion time and saves system resources.
The YUV colour space consists of a luminance component Y and two chrominance components U and V. The luminance component Y describes how bright or dark a colour is and is used for the short-period Gaussian background modelling, whose detection yields the binary map of moving objects and of foreground with small disturbances relative to the background. U and V are independent colour-difference signals that describe the hue and saturation of an image; the U and V components of an image are used for the long-period Gaussian background modelling, whose detection yields the suspicious stationary objects. Because the Y luminance component is not included in this computation, the influence of illumination, shadows and similar factors on the detection result is reduced. For the long-period Gaussian background modelling and foreground detection, in order to reduce the running time of the front-end algorithm, the present invention uses the UV chrominance components for detection and combines the detection results of two adjacent frames to decide the foreground pixels.
Step 2 further comprises the following steps:
Step 21: perform background modelling with Gaussian mixture models (GMM), building two mixed Gaussian background models with different update learning rates, namely the long-period background model m_l and the short-period background model m_s.
Step 22: extract the foreground of the current frame with each mixed Gaussian background model, obtaining the foreground binary maps f_l and f_s of the long-period and short-period background models, as follows.
The long-period Gaussian background model m_l is updated with the two chrominance components U and V of the image frame while the foreground image is obtained; binarisation with an appropriate threshold is then performed, and the binarisation result is stored in a matrix f_l of the same size as the current frame, a pixel being marked as foreground when its chrominance components differ from the Gaussian background by more than the threshold, in combination with the previous frame.
Here, u_{n-1} denotes the U chrominance component of the previous frame image, u_b the U-component Gaussian background, v_n the V chrominance component of the current frame image, v_b the V-component Gaussian background, and t_uv a preset threshold.
For moving object detection with the short-period mixed Gaussian background, the luminance information Y of the input image frame is used to update the short-period Gaussian background model m_s, and the foreground result f_s is detected from the luminance information.
Here, y_n denotes the luminance of the image, y_b the Gaussian background of the Y component, and t_y a predetermined threshold.
Combining the foreground information of the two mixed Gaussian models yields the detection result for suspicious static objects, where a value of 1 means the pixel is detected as a suspicious static object and 0 means it is not.
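The combination of the two foreground maps lends itself to a short illustration. The sketch below is not part of the patent text: it uses OpenCV's MOG2 background subtractor as a stand-in for the long-period and short-period mixed Gaussian models, feeding the chrominance planes to the long-period model and the luminance plane to the short-period one, and the combination rule "in the long-period foreground but absent from the short-period foreground means static candidate" is an assumption, since the patent's own combination formula is not reproduced above.

```python
import cv2
import numpy as np

# Two MOG2 subtractors stand in for the long-period model m_l (fed with the
# u/v chrominance planes) and the short-period model m_s (fed with luminance).
# History lengths are illustrative placeholders.
mog_long = cv2.createBackgroundSubtractorMOG2(history=3000, varThreshold=16, detectShadows=False)
mog_short = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)

def suspicious_static_mask(frame_bgr):
    """Return a binary map of suspicious static pixels for one frame."""
    yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)

    # Long-period model on chrominance only (less sensitive to illumination/shadows).
    f_l = mog_long.apply(cv2.merge([u, v, np.zeros_like(u)]))
    # Short-period model on luminance: captures currently moving objects.
    f_s = mog_short.apply(cv2.merge([y, y, y]))

    _, f_l = cv2.threshold(f_l, 127, 255, cv2.THRESH_BINARY)
    _, f_s = cv2.threshold(f_s, 127, 255, cv2.THRESH_BINARY)

    # Assumed combination rule: long-period foreground that is no longer moving.
    static = cv2.bitwise_and(f_l, cv2.bitwise_not(f_s))

    # Light noise cleanup; the patent applies further filtering in the next step.
    static = cv2.medianBlur(static, 5)
    return static
```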
Step 23: apply median filtering, morphological operations and other post-processing to the obtained target foreground binary map to reduce the adverse influence of noise, overly small objects and other interference, finally obtaining an accurate and complete foreground binary map of the suspicious static objects.
Because abrupt illumination changes may occur in the monitored scene, the initial foreground binary map may contain large false-detection areas caused by the lighting change. Therefore the number sumf of pixels belonging to the foreground in the foreground binary map, i.e. pixels with f(x, y) = 1, is counted and checked against two thresholds. When 0 < sumf <= t1, there is no abnormal change or moving object in the scene and the foreground binary map is treated as motion noise. When sumf >= t2, the scene information has changed and an abrupt illumination change is assumed, so the mixed Gaussian background models are quickly rebuilt. When t1 < sumf < t2, a suspicious object is tentatively assumed to have appeared. Here t1 and t2 are predetermined thresholds with t2 > t1.
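A minimal sketch of the dual-threshold check just described; the function name and the way the result is reported back to the caller are illustrative, and t1 and t2 must be tuned for the scene.

```python
import numpy as np

def classify_foreground_count(fg_binary, t1, t2):
    """Classify a foreground binary map by its number of foreground pixels.

    Follows the three-way rule above: 0 < sumf <= t1 -> motion noise,
    sumf >= t2 -> abrupt illumination change (rebuild the background models),
    t1 < sumf < t2 -> a suspicious object has tentatively appeared.
    t1 and t2 (t2 > t1) are preset thresholds.
    """
    sumf = int(np.count_nonzero(fg_binary))
    if sumf >= t2:
        return "illumination_change"
    if sumf <= t1:
        return "noise_or_motion"
    return "suspicious_object"
```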
Step 24: when a suspicious object is tentatively determined to exist, find and label the connected components of the current target foreground binary map, and proceed to the next step.
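The connected-component search can be illustrated with OpenCV's connected-component analysis; this is an illustrative choice, and the minimal-area filter is a placeholder rather than a value from the patent.

```python
import cv2

def labelled_components(static_mask, min_area=50):
    """Label the connected components of the suspicious-static binary map and
    return the bounding boxes of those above a minimal area."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(static_mask, connectivity=8)
    boxes = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((int(x), int(y), int(w), int(h)))
    return labels, boxes
```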
Step 3: accumulate the dwell time of each suspicious static target region according to the target foreground binary map; within the preset alarm period, if the dwell time t(x, y) exceeds a predetermined threshold th_max, trigger an alarm, mark the size and position of the suspicious static object on the original image, and output the result for display.
This step accumulates the dwell time separately for left-behind objects and lost objects, mainly because different alarm trigger conditions should be set for the two cases. When the dwell time of a suspicious object satisfies the corresponding condition, it is judged to be a left-behind object or a lost object and an alarm is output. Here t(x, y) is the accumulated dwell time of the suspicious static object, expressed as:
$$t(x,y)=\begin{cases} t(x,y)+1, & f(x,y)=1\\ t(x,y)-k, & f(x,y)=0\\ th_{\max}, & t(x,y)>th_{\max}\end{cases}$$
where f(x, y) is the target foreground binary map obtained in step 2, k >= 0 is a preset parameter, and the threshold th_max depends on the frame rate of the camera equipment and on the alarm delay required for left-behind and lost objects.
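The accumulation rule above maps directly onto a per-pixel update; the sketch below assumes NumPy arrays for t(x, y) and f(x, y), and the default values of k and th_max are placeholders.

```python
import numpy as np

def update_dwell_time(t_map, fg_binary, k=1, th_max=750):
    """Per-pixel dwell-time accumulation t(x, y) as in the formula above.

    t_map     : array of accumulated dwell counts, same size as the frame
    fg_binary : current target foreground binary map f(x, y) in {0, 1}
    k, th_max : decay parameter and alarm ceiling (illustrative defaults;
                th_max depends on frame rate and the desired alarm delay)
    """
    fg = fg_binary.astype(bool)
    t_map[fg] += 1                         # foreground pixel: keep accumulating
    t_map[~fg] -= k                        # background pixel: decay by k
    np.clip(t_map, 0, th_max, out=t_map)   # clamp to [0, th_max]
    alarm_mask = t_map >= th_max           # pixels whose dwell time triggers an alarm
    return t_map, alarm_mask
```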
Step 4: determine whether a region contains a detection target by analysing the jitter of the edges of the suspicious static target region, and obtain the final rectangular target region.
Step 4 further comprises the following steps:
Step 41: for the target foreground binary map obtained in step 2, find and label the connected components present in it, and save the outer-contour edge information of each connected component.
Step 42: count how the outer-contour edge pixels of a given connected component change within a certain period of time. The edge-contour information of the connected component is stored in a two-dimensional matrix m(i, j), in which contour-edge pixels are set to 1 and all other entries are set to 0, and the edge-contour information is accumulated over n consecutive target foreground binary maps to obtain the accumulated counts.
Step 43: save the outer-contour edge information and the accumulated counts in a mask map table as the data structure. The mask map table reflects the degree of jitter of the edge pixels: the smaller its value, the poorer the stability and the less stable the edge. For example, if the jitter measure of a connected region, $\sum_{i,j} m(i,j) / s$, is below a certain threshold t, where m(i, j) is the two-dimensional matrix of edge information, s is the number of non-zero entries in the matrix and t is a predetermined threshold, the connected component is not considered a detection target and is deleted; otherwise it is retained. The result is the final rectangular target regions, which give the concrete size and position of left-behind and lost objects.
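A hedged sketch of the edge-jitter statistic of steps 41-43, assuming OpenCV contour extraction; the accumulation window n (length of the mask list), the region tracking, and the threshold t are application choices, not values from the patent.

```python
import cv2
import numpy as np

def edge_jitter_score(masks, region_box, t=None):
    """Accumulate contour-edge hits of one connected region over several frames
    and score its stability, following the mask-map idea described above.

    masks      : list of foreground binary maps (uint8, 0/255)
    region_box : (x, y, w, h) rectangle of the connected region being tracked
    Returns the mean accumulated hit count over pixels that were ever on a
    contour; low values mean the edge jitters (e.g. a lingering pedestrian)
    and the region should be discarded.
    """
    x, y, w, h = region_box
    m = np.zeros((h, w), dtype=np.int32)      # two-dimensional matrix m(i, j)
    for fg in masks:
        roi = fg[y:y + h, x:x + w]
        contours, _ = cv2.findContours(roi, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        edge = np.zeros_like(roi)
        cv2.drawContours(edge, contours, -1, 255, 1)   # outer-contour pixels
        m[edge > 0] += 1                               # accumulate edge hits
    s = np.count_nonzero(m)                            # number of non-zero entries
    score = m.sum() / s if s else 0.0
    return score if t is None else (score, score >= t)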
Step 5: distinguish left-behind objects from lost objects in the static target regions that contain a detection target, and raise the corresponding alarm.
After the rectangular regions of the static targets have been obtained in step 4, a flood-fill algorithm is applied to the original image with these regions as the mask matrix. In an embodiment of the present invention, 8-connected filling is used, and the pixels of the original image that were detected as static targets serve as the seed pixels for filling the region. Step 5 further comprises the following steps:
Step 51: scan all pixels in the target rectangle region point by point.
Step 52: for each pixel of the colour image in the target rectangle region whose corresponding value in the foreground binary map is 1, perform a colour fill within a fixed difference range, with fill colour value setvalue; if the original colour value of the pixel already equals setvalue, do nothing and return to step 51.
Step 53: repeat steps 51 and 52 until all pixels have been scanned.
Step 54: compute the area area2 of the newly filled connected component in the target rectangle region and the area area1 of the connected component in the original target rectangle region. The newly filled connected component can cover the original target connected component; if the newly filled area exceeds the preset range, the filled connected component is considered to belong to the background of a lost object and the connected component is judged to be a lost object; otherwise it is treated as a left-behind object. Left-behind objects are labelled a_i and lost objects are labelled s_i.
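A possible implementation of steps 51-54 using OpenCV's 8-connected flood fill; the fill tolerance and the "exceeds the preset range" ratio are illustrative parameters, not values taken from the patent. The intuition is that where an object was removed, the exposed background is continuous with the surrounding background, so the fill grows well beyond the original foreground blob.

```python
import cv2
import numpy as np

def classify_left_or_lost(frame_bgr, fg_binary, box, tol=8, grow_ratio=1.5):
    """Distinguish a left-behind object from a lost (removed) object by
    8-connected flood filling inside the target rectangle."""
    x, y, w, h = box
    area1 = int(np.count_nonzero(fg_binary[y:y + h, x:x + w]))   # original blob area

    # Seed the flood fill at a foreground pixel of the rectangle on the colour image.
    ys, xs = np.nonzero(fg_binary[y:y + h, x:x + w])
    if len(xs) == 0:
        return "unknown"
    seed = (x + int(xs[0]), y + int(ys[0]))

    mask = np.zeros((frame_bgr.shape[0] + 2, frame_bgr.shape[1] + 2), np.uint8)
    flags = 8 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)   # 8-connectivity, write to mask
    cv2.floodFill(frame_bgr.copy(), mask, seed, (255, 255, 255),
                  (tol,) * 3, (tol,) * 3, flags)
    area2 = int(np.count_nonzero(mask[y + 1:y + h + 1, x + 1:x + w + 1]))

    # If the fill spreads far beyond the original blob, the blob is exposed
    # background -> the object was removed (lost); otherwise it was left behind.
    return "lost" if area2 > grow_ratio * area1 else "left_behind"
```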
Step 6: detect, by comparison against the background image, whether a left-behind object that has lingered in the scene is removed again, and if so terminate the alarm; in addition, detect, by comparison of the foreground targets, whether a stationary object is removed and later put back, or has its position moved, and if so output the corresponding alarm information. Two concrete cases are distinguished.
First, for the case where a left-behind object appears in the scene and is later removed: the method builds a background image from the very beginning and keeps updating it as the reference standard; a mask matrix mask is built from the position information of the currently established foreground region, and the current image frame is copied to update the background image. After a left-behind object has been detected, the region of interest (ROI) of the left-behind object in the current image is differenced against the historical background image, and the change state of the static target is detected in real time with an adaptive threshold. When the left-behind object is removed, the absolute value of each pixel difference becomes very small; the left-behind region that no longer satisfies the static-target condition is removed and the alarm state is cancelled. The background image is updated independently of the Gaussian background models, to avoid a long-stationary object being absorbed into the background statistics once the life cycle of the mixed Gaussian model ends.
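A small sketch of this background-comparison check; the fixed difference threshold stands in for the adaptive threshold mentioned above, and the ratio used to decide presence is a placeholder.

```python
import cv2
import numpy as np

def left_object_still_present(background, frame, roi_box, diff_thresh=20, min_ratio=0.2):
    """Check whether an alarmed left-behind object is still inside its ROI by
    differencing the current frame against the separately maintained
    background image."""
    x, y, w, h = roi_box
    bg_roi = cv2.cvtColor(background[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    cur_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(cur_roi, bg_roi)
    changed = np.count_nonzero(diff > diff_thresh) / float(w * h)
    # If almost nothing differs from the stored background, the object has
    # been taken away and the alarm for this region can be cancelled.
    return changed >= min_ratio
```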
Second, for the case where a lost object disappears and is later put back into the monitored scene: the method checks whether a newly appearing foreground target matches the historical lost object; if they match, the lost-object alarm is cancelled, otherwise the newly appearing target is judged to be a new left-behind object.
When a new suspicious object is detected in the image, the historical lost object is extracted from the background image and compared and matched against the ROI of the newly appearing target in the current foreground image, for example by combining the colour-statistics histogram and shape features of the images. The matching is performed as follows.
First, the grey levels of the image region are quantised with a histogram inside the mask matrix, and after obtaining the histograms of the two chrominance components U and V their Euclidean distances are computed, where n is the number of quantised histogram bins for the chrominance information; the smaller this distance, the more similar the two target images.
Then, the geometric invariant moments of the two images to be matched are extracted, and their normalised product correlation d is computed; the larger d is, the higher the matching degree. The normalised product correlation d is computed by the following formula:
$$d=\frac{\sum_{k=1}^{n} a_k s_k}{\left[\sum_{k=1}^{n} (a_k)^2\right]^{1/2}\left[\sum_{k=1}^{n} (s_k)^2\right]^{1/2}},$$
where a_k denotes the geometric invariant moments of the lost-object region in the background image, s_k the geometric invariant moments of the newly appearing suspicious target region, and n the number of geometric invariant moments.
Finally, whether the ROI regions match is judged by combining the colour histogram information and the shape information, with the following concrete criteria:
(1) if the Euclidean distances of the two colour components both satisfy d2 <= t_e1 and d >= t_d1, the matching condition is met;
(2) if the normalised product correlation satisfies d >= t_d2 and d2 <= t_e2, the matching condition is met; where t_e1, t_d1, t_e2 and t_d2 are predetermined thresholds satisfying t_d2 > t_d1 and t_e1 < t_e2.
If one of the above criteria is met, the newly appearing target is considered to match the historical lost object, i.e. the lost object has been removed and put back; the lost-object alarm is then cancelled, the background image is updated, and information about the change of position of the static object in the scene is output.
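A sketch of the two-part matching test, combining u/v histogram distances with a Hu-moment correlation as the geometric invariant moments; the bin count and all four thresholds are illustrative placeholders rather than the patent's values.

```python
import cv2
import numpy as np

def match_lost_object(bg_roi_bgr, new_roi_bgr, n_bins=32,
                      te1=0.15, td1=0.85, te2=0.30, td2=0.95):
    """Match a newly appeared static target against the historical lost object
    stored in the background image, using colour histograms and Hu moments."""
    def uv_hists(roi):
        yuv = cv2.cvtColor(roi, cv2.COLOR_BGR2YUV)
        hu = cv2.calcHist([yuv], [1], None, [n_bins], [0, 256]).ravel()
        hv = cv2.calcHist([yuv], [2], None, [n_bins], [0, 256]).ravel()
        return hu / (hu.sum() + 1e-9), hv / (hv.sum() + 1e-9)

    # Euclidean distance between the quantised u and v histograms (smaller = more similar).
    (u1, v1), (u2, v2) = uv_hists(bg_roi_bgr), uv_hists(new_roi_bgr)
    d2_u = float(np.linalg.norm(u1 - u2))
    d2_v = float(np.linalg.norm(v1 - v2))

    # Normalised product correlation d of the two Hu-moment vectors (larger = more similar).
    def hu_moments(roi):
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        return cv2.HuMoments(cv2.moments(gray)).ravel()

    a, s = hu_moments(bg_roi_bgr), hu_moments(new_roi_bgr)
    d = float(np.dot(a, s) / (np.linalg.norm(a) * np.linalg.norm(s) + 1e-9))

    # Criterion (1): both colour distances small enough and shape correlation high enough.
    if d2_u <= te1 and d2_v <= te1 and d >= td1:
        return True
    # Criterion (2): very high shape correlation with a looser colour bound.
    if d >= td2 and d2_u <= te2 and d2_v <= te2:
        return True
    return False
```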
Fig. 2 shows the lost-object detection result of one embodiment of the invention, and Fig. 3 shows the left-behind object detection result of one embodiment of the invention.
The specific embodiments described above further explain the objectives, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (9)

1. A left-behind object and lost object detection method, characterised in that the method comprises the following steps:
Step 1: acquire video data;
Step 2: learn the background of each image to be detected based on long-period and short-period Gaussian mixture models, extract the target foreground, i.e. the suspicious static target regions, and obtain the target foreground binary map;
Step 3: accumulate the dwell time of each suspicious static target region according to the target foreground binary map; within the preset alarm period, if the dwell time t(x, y) exceeds a predetermined threshold th_max, trigger an alarm, mark the size and position of the suspicious static object on the original image, and output the result for display;
Step 4: determine whether a region contains a detection target by analysing the jitter of the edges of the suspicious static target region, and obtain the final rectangular target region;
Step 5: distinguish left-behind objects from lost objects within the rectangular target regions that contain a detection target, and raise an alarm;
wherein the accumulation of the dwell time is performed using the following formula:
$$t(x,y)=\begin{cases} t(x,y)+1, & f(x,y)=1\\ t(x,y)-k, & f(x,y)=0\\ th_{\max}, & t(x,y)>th_{\max}\end{cases}$$
where f(x, y) denotes the target foreground binary map, k >= 0 is a preset parameter, and th_max is a predetermined threshold.
2. The method according to claim 1, characterised in that step 2 further comprises the following steps:
Step 21: perform background modelling with Gaussian mixture models, building two mixed Gaussian background models with different update learning rates: the long-period background model m_l and the short-period background model m_s;
Step 22: extract the foreground of the current frame with each mixed Gaussian background model, obtain the foreground binary maps f_l and f_s of the long-period and short-period background models, and detect the suspicious static objects from f_l and f_s;
Step 23: apply median filtering and morphological processing to the obtained target foreground binary map;
Step 24: find and label the connected components of the current target foreground binary map.
3. The method according to claim 2, characterised in that the foreground binary map f_l is obtained as follows: the long-period Gaussian background model m_l is updated with the two chrominance components U and V of the image frame while the foreground image is obtained; binarisation with an appropriate threshold is then performed and the result is stored in a matrix f_l of the same size as the current frame, giving the foreground binary map f_l,
where u_{n-1} denotes the U chrominance component of the previous frame image, u_b the U-component Gaussian background, v_n the V chrominance component of the current frame image, v_b the V-component Gaussian background, and t_uv a preset threshold.
4. The method according to claim 2, characterised in that the foreground binary map f_s is obtained as follows: for moving object detection with the short-period mixed Gaussian background, the luminance information Y of the input image frame is used to update the short-period Gaussian background model m_s, and the foreground binary map f_s is obtained from the luminance detection,
where y_n denotes the luminance of the image, y_b the Gaussian background of the Y component, and t_y a predetermined threshold.
5. The method according to claim 2, characterised in that in step 22 the suspicious static objects detected from the foreground binary maps f_l and f_s are expressed such that a value of 1 means the detection result is a suspicious static object and 0 means it is not.
6. The method according to claim 1, characterised in that step 4 further comprises the following steps:
Step 41: for the target foreground binary map, find and label the connected components present in it, and save the outer-contour edge information of each connected component;
Step 42: count how the outer-contour edge pixels of a given connected component change within a certain period of time, store the edge-contour information of the connected component in a two-dimensional matrix m(i, j), and accumulate the edge-contour information over n consecutive target foreground binary maps to obtain the accumulated counts;
Step 43: save the outer-contour edge information and the accumulated counts in a mask map table as the data structure and obtain the degree of jitter of the connected region; if the degree of jitter of a connected region is below a certain threshold, the connected component is not considered a detection target and is deleted, otherwise it is retained, finally obtaining the rectangular target region.
7. The method according to claim 6, characterised in that in the two-dimensional matrix m(i, j) the contour-edge pixels are set to 1 and all other entries are set to 0.
8. The method according to claim 1, characterised in that step 5 further comprises the following steps:
Step 51: scan all pixels in the target rectangle region point by point;
Step 52: for each pixel of the colour image in the target rectangle region whose corresponding value in the foreground binary map is 1, perform a colour fill within a fixed difference range;
Step 53: repeat steps 51 and 52 until all pixels have been scanned;
Step 54: compute the area of the newly filled connected component in the target rectangle region and the area of the connected component in the original target rectangle region; if the newly filled connected-component area exceeds the preset range, the filled connected component is considered to belong to the background of a lost object and the connected component is judged to be a lost object, otherwise it is treated as a left-behind object.
9. The method according to claim 1, characterised in that the method further comprises detecting, by comparison against the background image, whether a left-behind object that has lingered in the scene is removed again, and/or detecting, by comparison of the foreground targets, whether a stationary object is removed and later put back; if so, the alarm is cancelled.
CN201310745099.3A 2013-12-30 2013-12-30 Left object and lost object real-time detection method based on embedded system Expired - Fee Related CN103714325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310745099.3A CN103714325B (en) 2013-12-30 2013-12-30 Left object and lost object real-time detection method based on embedded system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310745099.3A CN103714325B (en) 2013-12-30 2013-12-30 Left object and lost object real-time detection method based on embedded system

Publications (2)

Publication Number Publication Date
CN103714325A CN103714325A (en) 2014-04-09
CN103714325B true CN103714325B (en) 2017-01-25

Family

ID=50407285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310745099.3A Expired - Fee Related CN103714325B (en) 2013-12-30 2013-12-30 Left object and lost object real-time detection method based on embedded system

Country Status (1)

Country Link
CN (1) CN103714325B (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105404847B (en) * 2014-09-16 2019-01-29 北京计算机技术及应用研究所 A kind of residue real-time detection method
CN104599458A (en) * 2014-12-05 2015-05-06 柳州市瑞蚨电子科技有限公司 Wireless intelligent video surveillance system based warning method
CN106408554B (en) * 2015-07-31 2019-07-09 富士通株式会社 Residue detection device, method and system
CN106612385B (en) * 2015-10-22 2019-09-06 株式会社理光 Video detecting method and video detecting device
CN105427281A (en) * 2015-11-04 2016-03-23 北京格灵深瞳信息技术有限公司 Change area detection method and device
CN105427303B (en) * 2015-11-18 2018-09-21 国网江苏省电力有限公司检修分公司 A kind of vision measurement and method of estimation of substation's legacy
CN105447863B (en) * 2015-11-19 2018-01-23 中国科学院自动化研究所 A kind of remnant object detection method based on improvement VIBE
CN105740814B (en) * 2016-01-29 2018-10-26 重庆扬讯软件技术股份有限公司 A method of determining solid waste dangerous waste storage configuration using video analysis
CN106128023A (en) * 2016-07-18 2016-11-16 四川君逸数码科技股份有限公司 A kind of wisdom gold eyeball identification foreign body leaves over alarm method and device
CN105957300B (en) * 2016-07-18 2019-04-30 四川君逸数码科技股份有限公司 A kind of wisdom gold eyeball identification is suspicious to put up masking alarm method and device
CN106846357A (en) * 2016-12-15 2017-06-13 重庆凯泽科技股份有限公司 A kind of suspicious object detecting method and device
CN106878674B (en) * 2017-01-10 2019-08-30 哈尔滨工业大学深圳研究生院 A kind of parking detection method and device based on monitor video
CN107145851A (en) * 2017-04-28 2017-09-08 西南科技大学 Constructions work area dangerous matter sources intelligent identifying system
CN109698895A (en) * 2017-10-20 2019-04-30 杭州海康威视数字技术股份有限公司 A kind of analog video camera, monitoring system and data transmission method for uplink
CN110782425A (en) * 2018-07-13 2020-02-11 富士通株式会社 Image processing method, image processing device and electronic equipment
CN110798592B (en) * 2019-10-29 2022-01-04 普联技术有限公司 Object movement detection method, device and equipment based on video image and storage medium
CN111062273B (en) * 2019-12-02 2023-06-06 青岛联合创智科技有限公司 Method for tracing, detecting and alarming remaining articles
CN111079612A (en) * 2019-12-09 2020-04-28 北京国网富达科技发展有限责任公司 Method and device for monitoring retention of invading object in power transmission line channel
CN111079621B (en) * 2019-12-10 2023-10-03 北京百度网讯科技有限公司 Method, device, electronic equipment and storage medium for detecting object
CN111160187B (en) * 2019-12-20 2023-05-02 浙江大华技术股份有限公司 Method, device and system for detecting left-behind object
CN111369529B (en) * 2020-03-04 2021-05-14 厦门星纵智能科技有限公司 Article loss and leave-behind detection method and system
CN111415347B (en) * 2020-03-25 2024-04-16 上海商汤临港智能科技有限公司 Method and device for detecting legacy object and vehicle
CN112183277A (en) * 2020-09-21 2021-01-05 普联国际有限公司 Detection method and device for abandoned object and lost object, terminal equipment and storage medium
CN112560655A (en) * 2020-12-10 2021-03-26 瓴盛科技有限公司 Method and system for detecting masterless article
CN112633184A (en) * 2020-12-25 2021-04-09 成都商汤科技有限公司 Alarm method and device, electronic equipment and storage medium
CN113554008B (en) * 2021-09-18 2021-12-31 深圳市安软慧视科技有限公司 Method and device for detecting static object in area, electronic equipment and storage medium
CN114022468B (en) * 2021-11-12 2022-05-13 珠海安联锐视科技股份有限公司 Method for detecting article left-over and lost in security monitoring
CN114495006A (en) * 2022-01-26 2022-05-13 京东方科技集团股份有限公司 Detection method and device for left-behind object and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509075A (en) * 2011-10-19 2012-06-20 北京国铁华晨通信信息技术有限公司 Remnant object detection method and device
CN102831384A (en) * 2011-06-13 2012-12-19 索尼公司 Method and device for detecting discards by video
CN102890778A (en) * 2011-07-21 2013-01-23 北京新岸线网络技术有限公司 Content-based video detection method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7359538B2 (en) * 2001-11-23 2008-04-15 R2 Technology Detection and analysis of lesions in contact with a structural boundary

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831384A (en) * 2011-06-13 2012-12-19 索尼公司 Method and device for detecting discards by video
CN102890778A (en) * 2011-07-21 2013-01-23 北京新岸线网络技术有限公司 Content-based video detection method and device
CN102509075A (en) * 2011-10-19 2012-06-20 北京国铁华晨通信信息技术有限公司 Remnant object detection method and device

Also Published As

Publication number Publication date
CN103714325A (en) 2014-04-09

Similar Documents

Publication Publication Date Title
CN103714325B (en) Left object and lost object real-time detection method based on embedded system
WO2021208275A1 (en) Traffic video background modelling method and system
CN110135269B (en) Fire image detection method based on mixed color model and neural network
CN104392468B (en) Based on the moving target detecting method for improving visual background extraction
CN102665071B (en) Intelligent processing and search method for social security video monitoring images
CN101833838B (en) Large-range fire disaster analyzing and early warning system
CN106897720A (en) A kind of firework detecting method and device based on video analysis
US20150356745A1 (en) Multi-mode video event indexing
CN110032977A (en) A kind of safety warning management system based on deep learning image fire identification
CN112800860B (en) High-speed object scattering detection method and system with coordination of event camera and visual camera
CN105469105A (en) Cigarette smoke detection method based on video monitoring
CN105046218B (en) A kind of multiple features traffic video smog detection method based on serial parallel processing
CN103955705A (en) Traffic signal lamp positioning, recognizing and classifying method based on video analysis
CN109255326B (en) Traffic scene smoke intelligent detection method based on multi-dimensional information feature fusion
Guo et al. Image-based seat belt detection
CN101286239A (en) Aerial shooting traffic video frequency vehicle rapid checking method
CN112017445B (en) Pedestrian violation prediction and motion trail tracking system and method
CN109087363B (en) HSV color space-based sewage discharge detection method
CN103034843A (en) Method for detecting vehicle at night based on monocular vision
CN102610104B (en) Onboard front vehicle detection method
CN102902960A (en) Leave-behind object detection method based on Gaussian modelling and target contour
CN105046948A (en) System and method of monitoring illegal traffic parking in yellow grid line area
Malhi et al. Vision based intelligent traffic management system
CN111079621A (en) Method and device for detecting object, electronic equipment and storage medium
CN112489055A (en) Satellite video dynamic vehicle target extraction method fusing brightness-time sequence characteristics

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170125