CN112633179A - Farmer market aisle object occupying channel detection method based on video analysis - Google Patents

Farmer market aisle object occupying channel detection method based on video analysis

Info

Publication number
CN112633179A
Authority
CN
China
Prior art keywords
farmer
difference
scene
contour
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011557632.XA
Other languages
Chinese (zh)
Inventor
郑宏弟
聂立功
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Youquan Technology Development Co ltd
Original Assignee
Hangzhou Youquan Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Youquan Technology Development Co ltd
Priority to CN202011557632.XA
Publication of CN112633179A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration by the use of local operators
    • G06T5/30 - Erosion or dilatation, e.g. thinning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Abstract

The invention discloses a method for detecting objects occupying aisles in a farmers' market based on video analysis, which comprises the following steps: S1, establish a background picture list: acquire, through a farmers' market camera, a background picture containing no illegally placed objects and judge whether a new scene exists; if not, keep the original scene background picture and leave the background picture list unchanged; if a new scene has been added, update the background picture list. S2, capture the current market scene picture from the camera's real-time stream and compare it with the background picture to obtain the difference between the two pictures. S3, find the contour and position information of the dilated difference object with a contour extraction algorithm from computer vision. S4, capture the current market scene picture from the camera at set time intervals, compare the pictures with an image similarity algorithm from digital image processing, and judge from the resulting score whether the difference object is an aisle-occupying object.

Description

Farmer market aisle object occupying channel detection method based on video analysis
Technical Field
The invention belongs to the technical field of video analysis, and in particular relates to a method for detecting aisle-occupying objects in a farmers' market based on video analysis.
Background
In a farmers' market, merchants who store goods in the aisles narrow the walkways, causing customers to trip, making it inconvenient for people to move around, and degrading the cleanliness of the market. In the event of a fire, an earthquake or another emergency, such obstructions can also become a major reason that evacuation and rescue cannot be carried out in time.
Most farmers' markets have built, or are building, traditional video surveillance systems, but these systems cannot effectively solve the problem of illegal aisle occupation and instead add to the workload. Passive human monitoring and after-the-fact playback cannot catch problems as they occur; the video content is complex and often of poor quality, so careful screening is impractical; and real-time monitoring and playback retrieval of massive volumes of video consume large amounts of labor and depend on the diligence and alertness of monitoring staff, who cannot watch the screens around the clock.
In view of the above, there is a need to develop an effective method for automatically identifying objects that occupy the passageways of farmers' markets.
Disclosure of Invention
In view of the above technical problems, the present invention aims to provide a method for detecting aisle-occupying objects in a farmers' market based on video analysis, in which a background picture is compared with the monitored picture using collected video surveillance data and artificial-intelligence video analysis, so that the presence of illegal occupation can be analyzed automatically.
In order to solve the technical problems, the invention adopts the following technical scheme:
a farmer market aisle object occupying channel detection method based on video analysis comprises the following steps:
s1, establishing a background picture list, acquiring a background picture of an object without an illegal lane occupation through a farmer market camera, judging whether a new scene exists, if not, maintaining the original scene background picture, and not updating the background picture list; if the newly added scene exists, maintaining the original scene background image according to the requirement, selecting the newly added scene image with the highest recognition rate as the newly added background image, and updating the background image list;
s2, obtaining a current farmer scene graph and a background graph according to real-time stream drawing of a farmer market camera, and obtaining a difference part between the two graphs through a difference value algorithm in a computer vision technology;
s3, finding out the outline and the position information of the expanded differential object through an outline extraction algorithm in the computer vision technology;
and S4, acquiring the current farmer scene graph of the farmer market camera at time intervals, processing the obtained difference object position and the farmer scene graph of S2 through S2 and S3, comparing the obtained difference object position and the obtained difference object scene graph with each other through a picture similarity calculation algorithm in the digital picture processing technology, and judging whether the difference object is a road occupying object or not according to the score value.
Preferably, the difference between the two pictures in S2 is obtained as follows: define the background picture array src1, the market scene picture array src2, and the difference result dst; compute dst(I)_c = |src1(I)_c - src2(I)_c|, the absolute per-channel difference at each pixel I, whose result represents the difference between the background picture and the market scene picture; strengthen the pixel intensity of the differences between the two pictures by threshold binarization; and then expand the strengthened difference pixels with a dilation algorithm to obtain the shape of the difference object.
Preferably, in S3 the contour and position information of the difference object are obtained as follows:
obtain the picture matrix F = {f(i, j)} of the difference-object shape from S2 and initialize NBD to 1; each contour is traced from its starting point (i, j) by the contour tracing algorithm, and every newly found contour B is assigned a new unique number, with NBD denoting the number of the contour currently being traced;
scan the picture matrix F with a raster scan, from left to right and from top to bottom; whenever a pixel (i, j) with gray value f(i, j) != 0 is scanned, execute the contour tracing algorithm and judge whether the pixel belongs to the current contour; reset LNBD to 1 each time the scan reaches the start of a new row of the picture and continue; LNBD stores the number of the contour B' most recently encountered during the raster scan; after the whole picture matrix F has been scanned, the contour information and positions of all objects in the picture have been obtained.
Preferably, in S4 the similarity between the two pictures is judged as follows:
define the two input picture matrices as x and y, and compare the brightness, contrast and structure of the two pictures with the following structural similarity (SSIM) formula to obtain a score between 0 and 1, where a larger score means the two pictures are more similar; the threshold is set to 0.85, and when the score lies between 0.85 and 1 the difference object is judged to be an aisle-occupying object; the SSIM formula is
SSIM(x, y) = l(x, y) · c(x, y) · s(x, y), with l(x, y) = (2·μx·μy + c1)/(μx² + μy² + c1), c(x, y) = (2·σx·σy + c2)/(σx² + σy² + c2), and s(x, y) = (σxy + c3)/(σx·σy + c3),
where l(x, y) is the brightness comparison, c(x, y) the contrast comparison and s(x, y) the structure comparison; μx and μy denote the means of x and y, σx and σy their standard deviations, and σxy the covariance of x and y; c1, c2 and c3 are constants that avoid numerical errors when a denominator approaches 0.
Preferably, the method further comprises:
S5, sending the occupying object and its position information to the violation-warning backend.
The invention has the following beneficial effects: schemes based on deep learning need large amounts of training data, whereas the embodiment of the invention adds a background picture library, so that whether illegal occupation exists can be judged effectively and quickly using only a difference analysis of the monitored frame against the background picture. This reduces server load and the effort of collecting training data, accurately locates occupation of a farmers' market aisle, and provides early warning of violations.
Drawings
Fig. 1 is a flowchart of the steps of the farmers' market aisle object occupation detection method based on video analysis according to an embodiment of the present invention;
Fig. 2 is the picture matrix used in the farmers' market aisle object occupation detection method based on video analysis according to the embodiment of the present invention;
Fig. 3 is the farmers' market background picture in an application example of the farmers' market aisle object occupation detection method based on video analysis according to the embodiment of the present invention;
Fig. 4 is the corresponding illegal-occupation scene picture in the application example of the farmers' market aisle object occupation detection method based on video analysis according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to Fig. 1, which shows a flowchart of the steps of the farmers' market aisle object occupation detection method based on video analysis according to an embodiment of the present invention, the method includes the following steps:
S1, establish a background picture list: acquire, through the farmers' market camera, a background picture containing no illegally placed objects and judge whether a new scene exists; if not, keep the original scene background picture and leave the background picture list unchanged; if a new scene has been added, keep the original scene background pictures as needed, select the new-scene picture with the highest recognition rate as the added background picture, and update the background picture list (a minimal bookkeeping sketch follows this list of steps);
S2, capture the current market scene picture from the real-time stream of the farmers' market camera and obtain the differing region between it and the background picture with a difference algorithm from computer vision;
S3, find the contour and position information of the dilated difference object with a contour extraction algorithm from computer vision;
and S4, capture the current market scene picture from the farmers' market camera at set time intervals, process it through S2 and S3, compare the resulting difference-object region with the corresponding region of the scene picture from S2 using an image similarity algorithm from digital image processing, and judge from the score whether the difference object is an aisle-occupying object.
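As a minimal illustration of the S1 bookkeeping, the Python sketch below maintains a background picture list keyed by camera. The camera_id key, the is_new_scene flag and the estimate_recognition_rate helper are assumptions introduced here for illustration; the patent does not specify how a scene change is detected or how the recognition rate of a candidate background picture is measured.

```python
# Minimal sketch of the S1 background-picture list (assumptions noted above).
background_list = {}  # camera_id -> background picture with no aisle-occupying objects


def estimate_recognition_rate(image):
    # Placeholder scoring hook: the patent does not define how the
    # "recognition rate" of a candidate background picture is measured.
    return 0.0


def update_background(camera_id, candidate_images, is_new_scene):
    # Existing scene: keep the original background picture, list unchanged.
    if camera_id in background_list and not is_new_scene:
        return background_list[camera_id]
    # New scene: select the candidate with the highest recognition rate
    # as the added background picture and update the list.
    best = max(candidate_images, key=estimate_recognition_rate)
    background_list[camera_id] = best
    return best
```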
In a specific application example, the difference between the two pictures in S2 is obtained as follows: define the background picture array src1, the market scene picture array src2, and the difference result dst; compare the two pictures with dst(I)_c = |src1(I)_c - src2(I)_c|, the absolute per-channel difference at each pixel I, whose result represents the difference between the background picture and the market scene picture; strengthen the pixel intensity of the differences between the two pictures by threshold binarization; and then expand the strengthened difference pixels with a dilation algorithm to obtain the shape of the difference object, i.e. the picture matrix F = {f(i, j)} shown in Fig. 2.
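As an illustration of this difference step, the following OpenCV sketch performs the absolute difference, threshold binarization and dilation. The file names, the threshold of 30 and the 5x5 kernel are assumed values for illustration; the patent does not fix these parameters.

```python
import cv2

# Background picture src1 and current market scene picture src2
# (stand-in file names; in practice src2 is a frame grabbed from the camera stream).
src1 = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
src2 = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

# dst(I) = |src1(I) - src2(I)|: per-pixel absolute difference.
dst = cv2.absdiff(src1, src2)

# Threshold binarization strengthens the differing pixels (threshold value assumed).
_, binary = cv2.threshold(dst, 30, 255, cv2.THRESH_BINARY)

# Dilation expands the strengthened difference pixels into the difference-object shape F.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
shape = cv2.dilate(binary, kernel, iterations=2)
```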
In a specific application example, in S3 the contour and position information of the difference object are obtained as follows:
obtain the picture matrix F = {f(i, j)} of the difference-object shape from S2 and initialize NBD to 1; each contour is traced from its starting point (i, j) by the contour tracing algorithm, and every newly found contour B is assigned a new unique number, with NBD denoting the number of the contour currently being traced;
scan the picture matrix F with a raster scan, from left to right and from top to bottom; whenever a pixel (i, j) with gray value f(i, j) != 0 is scanned, execute the contour tracing algorithm and judge whether the pixel belongs to the current contour; reset LNBD to 1 each time the scan reaches the start of a new row of the picture and continue; LNBD stores the number of the contour B' most recently encountered during the raster scan; after the whole picture matrix F has been scanned, the contour information and positions of all objects in the picture have been obtained.
Implementation of the contour tracing algorithm:
Step 1: scan the picture matrix from left to right and from top to bottom to find outer-contour starting points and inner-contour starting points.
(1) Outer-contour starting point: if f(i, j) = 1 and f(i, j-1) = 0, then (i, j) is the starting point of an outer contour; set NBD ← NBD + 1 and (i2, j2) ← (i, j-1);
(2) Inner-contour starting point: if f(i, j) >= 1 and f(i, j+1) = 0, then (i, j) is the starting point of an inner contour; set NBD ← NBD + 1 and (i2, j2) ← (i, j+1); otherwise, jump to Step 4;
Step 2: if the previously encountered contour B' contains the newly encountered contour B, then B' is the parent contour of the current contour B;
Step 3: follow the contour from its starting point and mark its pixels with the same contour number:
(3.1) starting from (i2, j2), look clockwise around (i, j) for a non-zero point (i1, j1); if none is found, assign -NBD to f(i, j) and jump to Step 4;
(3.2) set (i2, j2) ← (i1, j1) and (i3, j3) ← (i, j);
(3.3) starting from (i2, j2), look counterclockwise around (i3, j3) for the first non-zero point (i4, j4);
(3.4) update the value of f(i3, j3), the pixel currently being traced: if f(i3, j3+1) = 0, set f(i3, j3) ← -NBD; if f(i3, j3+1) != 0 (it may be positive or negative) and f(i3, j3) = 1, set f(i3, j3) ← NBD; otherwise leave the value unchanged;
(3.5) if (i4, j4) = (i, j) and (i3, j3) = (i1, j1), the trace has returned to its starting point: jump to Step 4; otherwise set (i2, j2) ← (i3, j3), then (i3, j3) ← (i4, j4), and return to (3.3);
Step 4: if f(i, j) != 1, set LNBD ← |f(i, j)|, then resume the raster scan from (i, j+1) until the bottom-right pixel of the picture is reached.
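In practice the raster scan and NBD/LNBD bookkeeping above need not be re-implemented by hand: OpenCV's findContours follows the same Suzuki-style border-following scheme. The sketch below is one way to obtain the contours and positions, assuming shape is the dilated binary difference picture produced in S2.

```python
import cv2

# 'shape' is the dilated binary difference picture from S2.
# RETR_TREE returns the full contour hierarchy, so parent contours (B') are available.
contours, hierarchy = cv2.findContours(shape, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

difference_objects = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)  # position of one difference object
    difference_objects.append({"contour": contour, "box": (x, y, w, h)})
```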
In a specific application embodiment, in S4 the similarity between the two pictures is judged as follows:
define the two input picture matrices as x and y, and compare the brightness, contrast and structure of the two pictures with the following structural similarity (SSIM) formula to obtain a score between 0 and 1, where a larger score means the two pictures are more similar; the threshold is set to 0.85, and when the score lies between 0.85 and 1 the difference object is judged to be an aisle-occupying object; the SSIM formula is
SSIM(x, y) = l(x, y) · c(x, y) · s(x, y), with l(x, y) = (2·μx·μy + c1)/(μx² + μy² + c1), c(x, y) = (2·σx·σy + c2)/(σx² + σy² + c2), and s(x, y) = (σxy + c3)/(σx·σy + c3),
where l(x, y) is the brightness comparison, c(x, y) the contrast comparison and s(x, y) the structure comparison; μx and μy denote the means of x and y, σx and σy their standard deviations, and σxy the covariance of x and y; c1, c2 and c3 are constants that avoid numerical errors when a denominator approaches 0.
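For illustration, a simplified whole-image SSIM can be written as below. This is only a sketch: it computes a single global score (the widely used form computes SSIM over local windows and averages the results) and folds s(x, y) into the product by taking c3 = c2/2; the crops patch_t0 and patch_t1 are stand-ins for the same difference-object region taken from the two scene pictures, and the 0.85 threshold is the value given in the description.

```python
import numpy as np


def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Global (whole-image) SSIM; with c3 = c2 / 2 the l*c*s product
    # collapses to the familiar two-factor fraction.
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )


# Stand-in crops of the same difference-object region at two different times.
patch_t0 = np.random.randint(0, 256, (64, 64))
patch_t1 = np.random.randint(0, 256, (64, 64))

score = ssim(patch_t0, patch_t1)
is_occupying_object = 0.85 <= score <= 1.0  # threshold given in the description
```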
Applying the farmers' market aisle object occupation detection method based on video analysis of the above embodiment, with Fig. 3 as the farmers' market background picture and Fig. 4 as the corresponding illegal-occupation scene picture, the electric vehicle on the right side of the aisle is analyzed as an aisle-occupying object.
Example 2
On the basis of Embodiment 1, this embodiment of the present invention further includes: S5, sending the occupying object and its position information to the violation-warning backend.
Sending the occupying object and its position information directly to the violation-warning backend greatly facilitates day-to-day management of the farmers' market.
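A minimal sketch of step S5 is given below, assuming the violation-warning backend accepts an HTTP POST with a JSON body; the endpoint URL and the payload fields are illustrative assumptions, not an interface defined by the patent.

```python
import json
import urllib.request


def report_violation(camera_id, box):
    # box = (x, y, w, h) of the aisle-occupying object from S3/S4.
    payload = json.dumps({"camera_id": camera_id, "box": box}).encode("utf-8")
    request = urllib.request.Request(
        "http://backend.example/violation-alerts",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status
```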
According to the embodiments of the invention, whether illegal occupation exists can be judged effectively and quickly using only a difference analysis of the monitored frame against the background picture; this reduces server load and the effort of collecting training data, accurately locates occupation of a farmers' market aisle, and provides early warning of violations.
It is to be understood that the exemplary embodiments described herein are illustrative and not restrictive. Although one or more embodiments of the present invention have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Claims (5)

1. A method for detecting aisle-occupying objects in a farmers' market based on video analysis, characterized by comprising the following steps:
S1, establishing a background picture list: acquiring, through a farmers' market camera, a background picture containing no illegally placed objects, and judging whether a new scene exists; if not, keeping the original scene background picture and leaving the background picture list unchanged; if a new scene has been added, keeping the original scene background pictures as needed, selecting the new-scene picture with the highest recognition rate as the added background picture, and updating the background picture list;
S2, capturing the current market scene picture from the real-time stream of the farmers' market camera and obtaining the differing region between it and the background picture through a difference algorithm from computer vision;
S3, finding the contour and position information of the dilated difference object through a contour extraction algorithm from computer vision;
and S4, capturing the current market scene picture from the farmers' market camera at set time intervals, processing it through S2 and S3, comparing the resulting difference-object region with the corresponding region of the scene picture from S2 through an image similarity algorithm from digital image processing, and judging from the score whether the difference object is an aisle-occupying object.
2. The method for detecting aisle-occupying objects in a farmers' market based on video analysis according to claim 1, wherein the difference between the two pictures in S2 is obtained as follows: defining the background picture array src1, the market scene picture array src2, and the difference result dst; computing dst(I)_c = |src1(I)_c - src2(I)_c|, the result of which represents the difference between the background picture and the market scene picture; strengthening the pixel intensity of the differences between the two pictures by threshold binarization; and then expanding the strengthened difference pixels with a dilation algorithm to obtain the shape of the difference object.
3. The method for detecting aisle-occupying objects in a farmers' market based on video analysis according to claim 2, wherein in S3 the contour and position information of the difference object are obtained as follows:
obtaining the picture matrix F = {f(i, j)} of the difference-object shape from S2 and initializing NBD to 1; tracing each contour from its starting point (i, j) with the contour tracing algorithm, assigning a new unique number to each newly found contour B, with NBD denoting the number of the contour currently being traced;
scanning the picture matrix F with a raster scan, from left to right and from top to bottom; whenever a pixel (i, j) with gray value f(i, j) != 0 is scanned, executing the contour tracing algorithm and judging whether the pixel belongs to the current contour; resetting LNBD to 1 each time the scan reaches the start of a new row of the picture and continuing, where LNBD stores the number of the contour B' most recently encountered during the raster scan; after the whole picture matrix F has been scanned, the contour information and positions of all objects in the picture are obtained.
4. The method for detecting aisle-occupying objects in a farmers' market based on video analysis according to claim 2, wherein in S4 the similarity between the two pictures is judged as follows:
defining the two input picture matrices as x and y, and comparing the brightness, contrast and structure of the two pictures with the following structural similarity (SSIM) formula to obtain a score between 0 and 1, where a larger score means the two pictures are more similar; the threshold is set to 0.85, and when the score lies between 0.85 and 1 the difference object is judged to be an aisle-occupying object; the SSIM formula is
SSIM(x, y) = l(x, y) · c(x, y) · s(x, y), with l(x, y) = (2·μx·μy + c1)/(μx² + μy² + c1), c(x, y) = (2·σx·σy + c2)/(σx² + σy² + c2), and s(x, y) = (σxy + c3)/(σx·σy + c3),
where l(x, y) is the brightness comparison, c(x, y) the contrast comparison and s(x, y) the structure comparison; μx and μy denote the means of x and y, σx and σy their standard deviations, and σxy the covariance of x and y; c1, c2 and c3 are constants that avoid numerical errors when a denominator approaches 0.
5. The method for detecting aisle-occupying objects in a farmers' market based on video analysis according to any one of claims 1 to 4, further comprising:
S5, sending the occupying object and its position information to the violation-warning backend.
CN202011557632.XA 2020-12-25 2020-12-25 Farmer market aisle object occupying channel detection method based on video analysis Pending CN112633179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011557632.XA CN112633179A (en) 2020-12-25 2020-12-25 Farmer market aisle object occupying channel detection method based on video analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011557632.XA CN112633179A (en) 2020-12-25 2020-12-25 Farmer market aisle object occupying channel detection method based on video analysis

Publications (1)

Publication Number Publication Date
CN112633179A true CN112633179A (en) 2021-04-09

Family

ID=75324842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011557632.XA Pending CN112633179A (en) 2020-12-25 2020-12-25 Farmer market aisle object occupying channel detection method based on video analysis

Country Status (1)

Country Link
CN (1) CN112633179A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103000030A (en) * 2012-11-28 2013-03-27 敖卓森 Snap-photograph method and device of bus lane occupation
CN106373426A (en) * 2016-09-29 2017-02-01 成都通甲优博科技有限责任公司 Computer vision-based parking space and illegal lane occupying parking monitoring method
CN110298837A (en) * 2019-07-08 2019-10-01 上海天诚比集科技有限公司 Fire-fighting road occupying exception object detecting method based on frame differential method
CN110443196A (en) * 2019-08-05 2019-11-12 上海天诚比集科技有限公司 Fire-fighting road occupying detection method based on SSIM algorithm
CN111163294A (en) * 2020-01-03 2020-05-15 重庆特斯联智慧科技股份有限公司 Building safety channel monitoring system and method for artificial intelligence target recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Tianrui (刘天睿): "Principle of the Suzuki contour tracing algorithm", https://www.cnblogs.com/liutianrui1/articles/10281465.html *
Han Gong (韩功) et al.: "Detection of illegal vehicle parking events using an object interaction model", Video Engineering (电视技术) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113450003A (en) * 2021-07-02 2021-09-28 中标慧安信息技术股份有限公司 Method and system for monitoring business occupation in market
CN115272984A (en) * 2022-09-29 2022-11-01 江西电信信息产业有限公司 Method, system, computer and readable storage medium for detecting lane occupation operation

Similar Documents

Publication Publication Date Title
CN110135269B (en) Fire image detection method based on mixed color model and neural network
Shah et al. Video background modeling: recent approaches, issues and our proposed techniques
US10445590B2 (en) Image processing apparatus and method and monitoring system
Avgerinakis et al. Recognition of activities of daily living for smart home environments
CN111091109B (en) Method, system and equipment for predicting age and gender based on face image
CN110033040B (en) Flame identification method, system, medium and equipment
CN109918971B (en) Method and device for detecting number of people in monitoring video
US8290277B2 (en) Method and apparatus for setting a lip region for lip reading
CN107659754B (en) Effective concentration method for monitoring video under condition of tree leaf disturbance
WO2023082784A1 (en) Person re-identification method and apparatus based on local feature attention
US20050139782A1 (en) Face image detecting method, face image detecting system and face image detecting program
CN114241548A (en) Small target detection algorithm based on improved YOLOv5
CN111738054B (en) Behavior anomaly detection method based on space-time self-encoder network and space-time CNN
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
CN111488805B (en) Video behavior recognition method based on salient feature extraction
CN112633179A (en) Farmer market aisle object occupying channel detection method based on video analysis
CN108038486A (en) A kind of character detecting method
CN107832732B (en) Lane line detection method based on treble traversal
Hafiz et al. Foreground segmentation-based human detection with shadow removal
US11836960B2 (en) Object detection device, object detection method, and program
CN109740527B (en) Image processing method in video frame
Park et al. Bayesian rule-based complex background modeling and foreground detection
CN116543333A (en) Target recognition method, training method, device, equipment and medium of power system
CN115601684A (en) Emergency early warning method and device, electronic equipment and storage medium
CN106530300A (en) Flame identification algorithm of low-rank analysis

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210409