CN100375530C - Movement detecting method - Google Patents
Movement detecting method
- Publication number
- CN100375530C · CNB2005100933368A · CN200510093336A
- Authority
- CN
- China
- Prior art keywords
- monitoring
- area
- sub
- frame image
- motion detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Closed-Circuit Television Systems (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
Abstract
The present invention discloses a motion detection method comprising the following steps: in a first step, a reference frame image and a current frame image are determined; in a second step, the monitoring area is divided into a plurality of monitoring sub-areas, and the current frame image and the reference frame image are correspondingly divided into a plurality of corresponding sub-areas; a threshold value is set for the monitoring area, and a corresponding threshold value and weight are set for each monitoring sub-area according to its sensitivity; in a third step, motion information in each sub-area of the current frame image and the reference frame image is obtained, and motion detection is performed on each monitoring sub-area according to the motion information and the threshold value of that sub-area; in a fourth step, motion detection in the whole monitoring area is determined according to the motion detection result of each sub-area, the weight of each sub-area, and the threshold value of the whole monitoring area. Because weight information is added, the sub-areas are organically combined, and consequently the motion detection result for the whole monitoring area is accurate and reliable.
Description
Technical Field
The invention relates to the field of digital image processing, in particular to a motion detection method.
Background Art
In order to ensure the safety of people's production and daily life, automatic monitoring systems have been widely used to automatically monitor areas set by users. A general monitoring system includes an image capturing unit, an image processing unit, and a display unit. Specifically, the image capturing unit is usually a camera for capturing images of a set area, the image processing unit may be an image processing chip for processing the image data captured by the camera, and the display unit may be a display for showing the images processed by the image processing chip. Generally, the monitoring system may further include image storage for storing image data of the monitored area for later viewing. In many cases there is no moving object in the set monitoring area for a long time, so there is no need to record the monitoring image during that time. In other words, the monitoring image needs to be recorded only when there is a moving object in the monitoring area, which saves storage space to the maximum extent and filters out unneeded images for convenient viewing. The monitoring system therefore needs to provide a motion detection method to achieve automatic monitoring. Note that the motion detection method is not limited to the above-mentioned application.
Generally, existing monitoring systems are configured with their own motion detection sensitivity. In colloquial terms, the monitoring system sets a threshold value, determines motion if the detected amount of motion is greater than the set threshold value, and determines rest otherwise. The threshold value cannot be set too low: a threshold that is too low may cause erroneous judgments, for example, wind blowing leaves may cause the system to judge that there is a moving object. The threshold value also cannot be set too high, which may cause the system to miss the moving objects that really need to be detected.
In the prior art, a monitoring system generally monitors the whole visible area of the camera, that is, it detects motion over the entire visible area: only one threshold value is set for the whole area, and whether there is a moving object is determined by detecting the amount of motion in the image of the whole area. However, the camera's visible area is not necessarily the area of interest. A moving object in an uninteresting area, for example a branch swinging in the wind, may also cause a misjudgment of the monitoring system and affect its motion detection accuracy; yet if only the area of interest is monitored, abnormal behavior elsewhere cannot be found in time.
Therefore, it is desirable to provide a motion detection technique that can effectively solve the above problems.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a motion detection method, which can detect motion in a detection area accurately and timely.
In order to achieve the above object, the present invention provides a motion detection method, comprising the steps of:
step one, determining a reference frame image and a current frame image;
step two, dividing the monitoring area into a plurality of monitoring sub-areas, and correspondingly dividing the current frame image and the reference frame image into a plurality of corresponding monitoring sub-areas; setting a threshold value for the monitoring area, and setting a corresponding threshold value and weight for each monitoring sub-area according to the sensitivity of each monitoring sub-area;
step three, acquiring motion information in each monitoring subarea of the current frame image and the reference frame image, and performing motion detection on each monitoring subarea according to the motion information and the threshold value of each monitoring subarea; and
step four, determining motion detection in the whole monitoring area according to the motion detection result of each monitoring sub-area, the weight of each monitoring sub-area, and the threshold value of the whole monitoring area.
Preferably, in step two the threshold and weight are set for each monitoring sub-region according to its sensitivity as follows: a smaller threshold and a larger weight are set for a monitoring sub-region with higher sensitivity; the less sensitive the monitoring sub-region, the larger the threshold and the smaller the weight that are set.
Preferably, the form and number of the monitoring sub-regions divided in step two can be set as required, wherein the dividing form includes even and uneven division, and the number is at least two.
Preferably, the motion information in the monitoring sub-regions in step three differs according to the image data format: for the YUV format, brightness-based information is selected as the motion information, and for the VGA format, gray-value-based information is selected as the motion information.
Preferably, the step three of performing motion detection on each monitoring sub-region according to the threshold value of each monitoring sub-region refers to: and comparing the motion information in each corresponding monitoring sub-area, and comparing the comparison result with the threshold value of each corresponding monitoring sub-area to obtain the motion detection result of the monitoring sub-area.
Preferably, determining motion detection in the entire monitoring region in step four according to the motion detection result of each monitoring sub-region, the weight of each monitoring sub-region, and the threshold value of the entire monitoring region means: the motion information of the whole monitoring area is calculated according to the following formula:
S = α1·S1 + α2·S2 + … + αj·Sj
where S represents the motion information of the entire monitored area, Si represents the motion detection result of the i-th monitored sub-region, αi represents the weight of the i-th monitoring sub-region, and j represents the number of monitoring sub-regions; the motion detection result of the whole monitoring area is then determined from the comparison of this motion information with the threshold value of the whole area.
Preferably, the reference frame image may be one or several images before the image currently acquired by the image capturing device, or may be a specific one in the sequence of images acquired by the image capturing device.
Preferably, the current frame image can be set as follows according to the requirements of the monitoring personnel: each image currently captured by the camera may be updated to the current frame image, or every other image, or every several images, captured by the camera may be updated to the current frame image.
Preferably, the acquired motion information of the current frame image is stored as the motion information of the reference frame image for the next motion detection.
Preferably, the sum of the weights of the monitoring sub-regions is 100%; the weight of a designated monitoring sub-region can be set to 100% as needed, so that the designated sub-region is monitored independently.
The motion detection method of the invention divides the whole monitoring area into a plurality of monitoring sub-areas and sets a corresponding threshold value and weight for each monitoring sub-area according to its sensitivity; specifically, a monitoring sub-area with higher sensitivity is given a smaller threshold value and a larger weight, and a monitoring sub-area with lower sensitivity is given a larger threshold value and a smaller weight, so that whether motion occurs in the whole monitoring area is judged comprehensively. This effectively reduces the possibility of missed and false alarms and improves the accuracy and reliability of motion detection of the monitoring system.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a zone division diagram of the present invention; and
fig. 3 is another area division diagram of the present invention.
Detailed Description
Generally, the motion detection method divides the whole monitoring area into a plurality of monitoring sub-areas and sets a corresponding threshold value and weight for each monitoring sub-area according to its sensitivity; specifically, a monitoring sub-area with higher sensitivity is given a smaller threshold value and a larger weight, and a monitoring sub-area with lower sensitivity is given a larger threshold value and a smaller weight, so that whether motion occurs in the whole monitoring area is judged comprehensively. This effectively reduces the possibility of missed and false alarms and improves the accuracy and reliability of motion detection of the monitoring system.
The motion detection method of the present invention is described in detail below with reference to fig. 1.
Step one, determining a reference frame image and determining a current frame image.
As with the physical reference frame in motion analysis, a reference frame image is first determined for motion detection. How the reference frame image is obtained may be set as required. In a preferred embodiment, the reference frame is the previous image, or one of the few previous images, of the image currently obtained by the camera device; that is, as the camera device continuously acquires images, the reference frame is continuously updated, and whether it is the immediately previous image or an earlier one can be set as required. In another embodiment, the reference frame image may be a specific image in the sequence acquired by the camera, for example a background image when the background is substantially unchanged, which reduces the computation load of the image processor. In other embodiments, the reference frame image may even be an image originally stored in the monitoring system. The determined reference frame image is stored in the memory of the monitoring system and updated in time.
The current frame image is captured by the image capturing device; however, it is not necessary for every captured image to be updated to the current frame image, and how the current frame image is determined may be set according to the requirements of the monitoring personnel. In a preferred embodiment, each image currently captured by the image capturing device is updated to the current frame image. In another preferred embodiment, every other image, or every several images, captured by the device is updated to the current frame image, which reduces the computation load of the image processor. In other embodiments, the image captured after every fixed period of time may be updated to the current frame image.
Step two, dividing the whole monitoring area into a plurality of monitoring sub-areas, and correspondingly dividing the current frame image and the reference frame image into a plurality of corresponding monitoring sub-areas; setting a threshold value for the whole monitoring area, and setting a corresponding threshold value and weight for each monitoring sub-area according to the sensitivity of each monitoring sub-area.
The division form of the monitoring area and the number of divided areas can be set according to the needs of the monitoring personnel. In a preferred embodiment, referring to fig. 2, the whole monitoring area is divided evenly into 4 × 4 monitoring sub-areas. In another embodiment, referring to fig. 3, the whole monitoring area is divided into 2 × 3 uneven monitoring sub-areas. In other embodiments, the monitoring personnel may even divide the whole monitoring area into prisms, triangles, and other regular or irregular shapes as desired. Taking a camera with a resolution of 640 × 480 pixels as an example, fig. 2 shows how the whole monitoring area is divided evenly into 4 × 4 sub-areas: 640/4 = 160 and 480/4 = 120, that is, each monitoring sub-area occupies 160 × 120 pixels. The pixels of sub-area Q(1, 1) are denoted Pixel(i, j), where i is the row index and j is the column index, with 1 ≤ i ≤ 120 and 1 ≤ j ≤ 160; the other monitoring sub-areas Q(2, 1), Q(2, 2), etc. are determined accordingly. Once the division form of the whole monitoring area and the number of divided areas are set, the current frame image and the reference frame image are divided into a plurality of corresponding monitoring sub-areas in the same form. Note that the number of divided regions of the whole monitoring area may be 2 or more.
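As an illustration only, the following minimal sketch (Python with NumPy; the function and variable names are hypothetical, not taken from the patent) shows the 4 × 4 even division of a 640 × 480 frame described above:

```python
import numpy as np

def divide_into_subregions(frame, rows=4, cols=4):
    """Split a frame (H x W array) evenly into rows x cols monitoring sub-regions.

    For a 640 x 480 frame and a 4 x 4 grid, each sub-region Q(r, c) covers
    120 rows x 160 columns of pixels, matching the example above.
    """
    h, w = frame.shape[:2]
    sub_h, sub_w = h // rows, w // cols
    regions = {}
    for r in range(rows):
        for c in range(cols):
            regions[(r + 1, c + 1)] = frame[r * sub_h:(r + 1) * sub_h,
                                            c * sub_w:(c + 1) * sub_w]
    return regions

frame = np.zeros((480, 640), dtype=np.uint8)   # placeholder 640 x 480 gray image
subregions = divide_into_subregions(frame)
print(subregions[(1, 1)].shape)                # (120, 160)
```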
The threshold and weight of each monitoring sub-region are set according to the following principle. A smaller threshold and a larger weight are set for a monitoring sub-region with higher sensitivity: the smaller the threshold, the more sensitive the monitoring sub-region, and the larger the weight, the greater the influence of that sub-region's motion detection result on the motion detection result of the whole monitoring region. A larger threshold and a smaller weight are set for a monitoring sub-region with lower sensitivity: the larger the threshold, the less sensitive the sub-region, and the smaller the weight, the smaller the influence of that sub-region's motion detection result on the motion detection result of the whole monitoring region.
The weights are determined by two principles: first, as stated above, the higher the sensitivity, the larger the weight; second, the weights of all monitoring sub-regions sum to 1. Taking fig. 3 as an example, a weight setting is described below. As shown in fig. 3, areas A1 and A2 are "window areas" and are sub-sensitive areas, areas B2 and B3 are "door areas" and are highly sensitive areas, and areas B1 and A3 are "wall areas" and are non-sensitive areas. According to the sensitivity of each monitoring sub-region, the following weights can be set: 25% each for B2 and B3; 20% each for A1 and A2; 5% each for A3 and B1. How the weight of a monitoring sub-region specifically affects the motion detection result of the whole region is described in detail in step four below.
According to the difference of the sensitivity, each monitoring sub-area can be set with different thresholds, the monitoring sub-area with higher sensitivity is set with smaller threshold, and conversely, the monitoring sub-area with lower sensitivity is set with larger threshold, and the detailed setting is described in the third step below. In addition, how to determine the threshold value of each sub-monitoring region and the threshold value of the whole monitoring region is the prior art that can be obtained by those skilled in the art without creative efforts, and therefore, the detailed description thereof is omitted here.
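A possible way to hold such a configuration is sketched below (the weight values follow the fig. 3 example above; the threshold values for A1 and A2 and all names are illustrative assumptions, not values prescribed by the patent):

```python
# Per-sub-region configuration for fig. 3: (threshold, weight).
# Weights follow the example above and sum to 1; thresholds are assumed
# luminance-ratio values (small for sensitive areas, large for insensitive ones).
SUBREGION_CONFIG = {
    "B2": (0.05, 0.25),  # door area, highly sensitive: small threshold, large weight
    "B3": (0.05, 0.25),
    "A1": (0.20, 0.20),  # window area, sub-sensitive (0.20 threshold is an assumed value)
    "A2": (0.20, 0.20),
    "A3": (0.35, 0.05),  # wall area, non-sensitive: large threshold, small weight
    "B1": (0.35, 0.05),
}
AREA_THRESHOLD = 0.60    # whole-area threshold, taken from the worked example in step four

assert abs(sum(weight for _, weight in SUBREGION_CONFIG.values()) - 1.0) < 1e-9
```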
And step three, acquiring motion information in each monitoring subarea of the current frame image and the reference frame image, comparing the motion information in each corresponding monitoring subarea, and comparing the comparison result with the threshold value of each corresponding monitoring subarea to obtain the motion detection result of the monitoring subarea.
The above-described motion information differs depending on the image data format.
For example, when the data format is YUV, because human eyes are sensitive to brightness, the brightness component Y may be selected as the basis and a brightness histogram selected as the motion information; the brightness histogram reflects the proportion of pixels at each brightness level of the image. Referring to fig. 3, suppose the brightness histogram of monitoring sub-region X (any one of A1, A2, A3, B1, B2, B3) in the current frame shows that pixels with a brightness value of 255 account for 60% of the total pixels in the region, denoted Dcur(X), and the brightness histogram of sub-region X in the reference frame shows that pixels with a brightness value of 255 account for 30% of the total pixels in the region, denoted Dref(X). The absolute difference of the motion information between the corresponding sub-regions of the current frame image and the reference frame image is calculated:
|Dcur(X) − Dref(X)| = |60% − 30%| = 30%,
if the X region is a high-sensitivity region such as B2 or B3, the predetermined threshold is low, which may be 5%, and 30% > 5%, a positive motion detection result may be obtained, and the motion detection result of the monitoring sub-region may be represented as "1". Conversely, if the X region is a low sensitivity region such as A3 or B1, the predetermined threshold is higher, which may be 35%, a negative motion detection result may be obtained, and the motion detection result of the monitoring sub-region may be represented as "0".
For another example, when the data format is VGA, the image is a gray-scale image, so the average gray value can be selected as the motion information. Taking fig. 3 as an example again, the gray-level average of each region in the current frame image and the reference frame image is calculated. Suppose the gray-level average of monitoring sub-region X (any one of A1, A2, A3, B1, B2, B3) in the current frame image is Pcur(X) = 100, and its gray-level average in the reference frame image is Pref(X) = 185. The absolute difference of the gray values of sub-region X between the current frame image and the reference frame image is:
|Pcur(X) − Pref(X)| = |185 − 100| = 85,
if the X-monitoring sub-area is a highly sensitive area, the predetermined threshold is low, possibly 50, and 85 > 50, a positive motion detection result can be obtained, and the motion result can be denoted as "1". Conversely, if the X-monitored sub-region is a low sensitive region, and the predetermined threshold is high, perhaps 90, then a negative motion detection result may be obtained, and the motion result may be represented as "0".
It can be seen that although the motion information of the monitored sub-regions differs by the same amount, the difference in sensitivity leads to different motion detection results.
Note that the motion-related features in the monitored sub-regions of the reference frame image need not be recalculated for every motion detection. As can be seen from step one, in some embodiments the reference frame image of the current motion detection is the current frame image of the previous motion detection, whose motion information has already been computed; in such embodiments it is only necessary to store the motion information already computed for the previous "current frame image" and reuse it as the motion information of the reference frame image in the next motion detection, serving as the source of the motion information of each monitored sub-area of the reference frame image in step three and thereby reducing the computation load of the image processor.
As can be seen from the description of steps two and three, the threshold of a monitored sub-region in the above method is an average threshold over the whole sub-region, and the corresponding motion information statistics are likewise computed over the whole sub-region. However, the determination of the sub-region threshold and the statistics of the motion information are not limited to the manners listed in steps two and three. In another embodiment, a line threshold may be set for the monitored sub-region and the motion information counted in units of lines: the per-line motion information of the reference frame and the current frame is compared to obtain an absolute difference, the difference is compared with the line threshold to obtain a motion detection result for that line, these steps are repeated for every line, and the per-line results are accumulated to obtain the motion detection result of the monitored sub-region. It can be seen that the motion detection result of a monitoring sub-area can be expressed in various ways set by the monitoring personnel.
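A rough sketch of this line-wise variant follows (the per-line statistic used here, a mean gray value per row, is an assumption for illustration; the patent leaves the statistic open):

```python
import numpy as np

def detect_subregion_motion_by_lines(cur_region, ref_region, line_threshold):
    """Line-wise variant: compare per-row statistics and accumulate the per-line results."""
    moving_lines = 0
    for cur_line, ref_line in zip(cur_region, ref_region):
        diff = abs(float(np.mean(cur_line)) - float(np.mean(ref_line)))
        if diff > line_threshold:
            moving_lines += 1          # this line of the sub-region shows motion
    return moving_lines                # accumulated per-line result for the sub-region
```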
And step four, determining the motion detection in the whole monitoring area according to the motion detection result of each monitoring sub-area, the weight of each monitoring sub-area and the threshold value in the whole monitoring area.
The motion information of the entire monitored area is calculated according to the following formula:
S = α1·S1 + α2·S2 + … + αj·Sj
where S represents the motion information of the entire monitored area, Si represents the motion detection result of the i-th monitored sub-region, αi represents the weight of the i-th monitoring sub-region, and j represents the number of monitoring sub-regions.
The motion detection result of the whole monitoring area is then determined according to the comparison of this motion information with the threshold value of the whole area.
Taking fig. 3 as an example, assume weights are set for each monitoring sub-region as before (25% each for B2 and B3, and correspondingly smaller weights for the remaining sub-regions) and that the threshold value of the whole monitoring area is 60%. If the weighted sum of the sub-region motion detection results is S = 75%, then, since 75% is greater than 60%, the whole area is considered to contain a moving object.
If the weights and the threshold value of the whole monitoring area are unchanged, but the motion detection results of B2, B3 and A1 are "0" and those of A2, A3 and B1 are "1", the motion information of the whole monitoring area is S = 20% + 5% + 5% = 30%; since the 60% threshold is greater than 30%, no moving object is considered to be present in the whole region.
It can be seen from the above two examples that, because weight information is added, three monitoring sub-regions may return positive motion detection results in both cases, yet the motion detection results for the whole detection region can be opposite. In addition, if a large amount of motion occurs in a sub-sensitive area, the probability that abnormal motion also occurs in the sensitive area is very high, so a motion detection signal can be sent out in time, winning effective processing time for automatic monitoring. Meanwhile, the larger threshold of the peripheral non-sensitive area avoids false alarms caused by a small amount of motion there, and the small threshold of the sensitive area improves the accuracy of motion detection and reduces the possibility of missed detection. In this way each monitoring sub-region is treated distinctly while all sub-regions are organically combined, so the motion detection result for the whole monitoring region is more accurate and reliable.
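Putting step four together, a minimal sketch of the weighted combination (the weights and the 60% whole-area threshold come from the fig. 3 examples above; everything else is an illustrative assumption):

```python
def detect_whole_area_motion(sub_results, weights, area_threshold):
    """Weighted combination of per-sub-region results (0 or 1), compared with the whole-area threshold."""
    s = sum(weights[name] * sub_results[name] for name in weights)
    return s > area_threshold, s

weights = {"A1": 0.20, "A2": 0.20, "A3": 0.05, "B1": 0.05, "B2": 0.25, "B3": 0.25}

# Second worked example: only A2, A3 and B1 report motion.
moving, s = detect_whole_area_motion(
    {"A1": 0, "A2": 1, "A3": 1, "B1": 1, "B2": 0, "B3": 0}, weights, 0.60)
print(moving, round(s, 2))   # False 0.3  -> no moving object in the whole area
```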
In a specific embodiment, the weight of a certain monitoring sub-area can be set to 100% according to the needs of the monitor, so as to realize the independent monitoring of the designated area.
The first step and the second step of the invention both belong to the process of initializing the monitoring system, so that the two steps have no strict sequence relation.
The above description is intended to be illustrative of the present invention and should not be taken as limiting the invention, as the invention is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention.
Claims (10)
1. A motion detection method for performing motion detection of a designated monitoring area, comprising the steps of:
step one, determining a reference frame image and a current frame image;
step two, dividing the whole monitoring area into a plurality of monitoring sub-areas, and correspondingly dividing the current frame image and the reference frame image into a plurality of corresponding monitoring sub-areas; setting a threshold value for the whole monitoring area, and setting a corresponding threshold value and weight for each monitoring sub-area according to the sensitivity of each monitoring sub-area;
step three, acquiring motion information in each monitoring subarea of the current frame image and the reference frame image, and performing motion detection on each monitoring subarea according to the motion information and the threshold value of each monitoring subarea; and
step four, determining motion detection in the whole monitoring area according to the motion detection result of each monitoring sub-area, the weight of each monitoring sub-area, and the threshold value of the whole monitoring area, specifically: the motion information of the whole monitoring area is calculated according to the following formula:
S = α1·S1 + α2·S2 + … + αj·Sj
where S represents the motion information of the entire monitoring area, Si represents the motion detection result of the i-th monitored sub-region, αi represents the weight of the i-th monitoring sub-region, and j represents the number of monitoring sub-regions,
and determining a motion detection result in the whole monitoring area according to a comparison result of the motion information of the whole monitoring area and a threshold value of the whole monitoring area.
2. The method of claim 1, wherein: in step two, the threshold and weight are set for each monitoring sub-area according to its sensitivity as follows: a smaller threshold and a larger weight are set for a monitoring sub-area with higher sensitivity; the less sensitive the monitoring sub-area, the larger the threshold and the smaller the weight that are set.
3. The method of claim 1, wherein: in step two, the division form of the monitoring sub-areas includes even and uneven division, and the number of monitoring sub-areas is two or more.
4. The method of claim 1, wherein: the motion information in the monitoring sub-area in step three may be different according to different image data formats.
5. The method of claim 1, wherein: in step three, performing motion detection on each monitoring sub-region according to the threshold value of each monitoring sub-region means: comparing the motion information in each pair of corresponding monitoring sub-regions, and comparing the comparison result with the threshold value of the corresponding monitoring sub-region to obtain the motion detection result of that monitoring sub-region.
6. The method of claim 1, wherein: the reference frame image is the previous image or images of the current image acquired by the camera device, or a fixed image in the image sequence acquired by the camera device.
7. The method of claim 1, wherein: the current frame image is set as follows according to the requirements of the monitoring personnel: each image currently captured by the camera device is updated to the current frame image, or every other image or every several images captured by the camera device is updated to the current frame image.
8. The method of claim 1, wherein: and storing the motion information in each monitoring sub-area of the obtained current frame image as the motion information in each corresponding monitoring sub-area of the reference frame image for the next monitoring sub-area motion detection.
9. The method of claim 1, wherein: the sum of the weights of the individual monitor sub-regions is 100%.
10. The method of claim 4, wherein: selecting information based on brightness as motion information in a monitoring sub-area for the image with the YUV image data format; and the image with the image data format of VGA selects information based on gray values as motion information in the monitoring sub-area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2005100933368A CN100375530C (en) | 2005-08-26 | 2005-08-26 | Movement detecting method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1731855A CN1731855A (en) | 2006-02-08 |
CN100375530C true CN100375530C (en) | 2008-03-12 |
Family
ID=35964137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2005100933368A Active CN100375530C (en) | 2005-08-26 | 2005-08-26 | Movement detecting method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100375530C (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101252680A (en) * | 2008-04-14 | 2008-08-27 | 中兴通讯股份有限公司 | Terminal and method for supervising with different supervising accuracy |
CN101324959B (en) * | 2008-07-18 | 2011-07-13 | 北京中星微电子有限公司 | Method and apparatus for detecting moving target |
CN101576952B (en) * | 2009-03-06 | 2013-10-16 | 北京中星微电子有限公司 | Method and device for detecting static targets |
CN104954738A (en) * | 2015-04-30 | 2015-09-30 | 广州视声光电有限公司 | Mobile detecting method and mobile detecting device |
CN104966060A (en) * | 2015-06-16 | 2015-10-07 | 广东欧珀移动通信有限公司 | Target identification method and device for moving object |
CN106060340B (en) * | 2016-07-06 | 2018-12-04 | 百味迹忆(厦门)网络科技有限公司 | Mobile detection method and system |
CN109544870B (en) * | 2018-12-20 | 2021-06-04 | 同方威视科技江苏有限公司 | Alarm judgment method for intelligent monitoring system and intelligent monitoring system |
CN111063146A (en) * | 2019-12-18 | 2020-04-24 | 浙江大华技术股份有限公司 | Defense method and equipment for monitoring picture and storage device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1124375A2 (en) * | 2000-01-15 | 2001-08-16 | Samsung Electronics Co., Ltd. | Wireless video monitoring system |
WO2003045062A1 (en) * | 2001-11-19 | 2003-05-30 | Mobiletalk Co., Ltd. | Video monitoring system |
CN2579731Y (en) * | 2002-07-18 | 2003-10-15 | 陈涛 | Radio image warning device for bank-note transport car |
CN1487677A (en) * | 2002-07-18 | 2004-04-07 | 涛 陈 | Radio image monitoring method and system |
CN1564216A (en) * | 2004-03-25 | 2005-01-12 | 浙江工业大学 | Intelligent security device |
- 2005-08-26: CN CNB2005100933368A patent CN100375530C (en), status: Active
Also Published As
Publication number | Publication date |
---|---|
CN1731855A (en) | 2006-02-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20180408
Address after: 100191, Xueyuan Road, Haidian District, Beijing, No. 607, No. six
Patentee after: Beijing Vimicro AI Chip Technology Co Ltd
Address before: 100083, No. 35 Xueyuan Road, Haidian District, Beijing, Nanjing Ning building, 15th floor
Patentee before: Beijing Vimicro Corporation