CN109726691B - Monitoring method - Google Patents

Monitoring method

Info

Publication number
CN109726691B
CN109726691B (application CN201811648286.9A)
Authority
CN
China
Prior art keywords
neural network
foreground image
neurons
neuron
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811648286.9A
Other languages
Chinese (zh)
Other versions
CN109726691A (en)
Inventor
金涛
江浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui rungu Technology Co.,Ltd.
Original Assignee
Anhui Rungu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Rungu Technology Co ltd filed Critical Anhui Rungu Technology Co ltd
Priority to CN201811648286.9A priority Critical patent/CN109726691B/en
Priority to CN202011170798.6A priority patent/CN112232265A/en
Publication of CN109726691A publication Critical patent/CN109726691A/en
Application granted granted Critical
Publication of CN109726691B publication Critical patent/CN109726691B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a monitoring method comprising the following steps: acquiring a captured video frame sequence; extracting a foreground image of the current video from the video frame sequence based on a pre-trained neural network model; removing the shadow in the foreground image according to a preset shadow removing method; judging whether an abnormal hole exists in the foreground image and, if so, filling the abnormal hole according to a preset abnormal hole filling method; and acquiring a circumscribed rectangle of the foreground image and highlighting the circumscribed rectangle. The method can quickly and accurately obtain the precise shape of the non-background image in the video picture, thereby realizing functions such as suspect comparison, key picture display and picture-center display. The method is highly intelligent, the obtained shape is highly accurate, and it has a wide application prospect.

Description

Monitoring method
Technical Field
The invention relates to the field of monitoring, in particular to a monitoring method.
Background
An intelligent monitoring system applies image processing, pattern recognition and computer vision techniques. Through an intelligent video analysis module added to the monitoring system, and relying on the powerful data processing capacity of a computer, it filters useless or interfering information from the video picture, automatically recognizes different objects, analyzes and extracts the key useful information in the video source, quickly and accurately locates the scene of an incident, judges abnormal situations in the monitored picture, and issues an alarm or triggers other actions in the fastest and best way. The intelligent monitoring system thereby provides effective early warning, in-process handling, and timely evidence collection after the incident.
Intelligent monitoring in the prior art has been widely deployed, but the techniques for extracting images from a streaming video feed and automatically processing the extracted images are still immature, so full automation is difficult to achieve and identification by human eyes is still required. Because the techniques for automatically processing images are immature, automatic alarms based on image processing suffer a high false alarm rate, and the precise outline and position of a suspected target are difficult to obtain.
Disclosure of Invention
In order to solve the above technical problem, the present invention provides a monitoring method. The invention is realized by the following technical scheme:
a method of monitoring, comprising:
acquiring a shot video frame sequence;
extracting a foreground image of a current video according to the video frame sequence based on a pre-trained neural network model;
removing the shadow in the foreground image according to a preset shadow removing method;
judging whether an abnormal hole exists in the foreground image, if so, filling the abnormal hole according to a preset abnormal hole filling method;
and acquiring a circumscribed rectangle of the foreground image, and highlighting the circumscribed rectangle.
Further, the foreground image can be compared with pictures in a preset suspect picture library to obtain a picture similarity, and if the picture similarity exceeds a preset threshold value, an alarm prompt is given.
Further, the simple method for acquiring the circumscribed rectangle is as follows:
extracting a first endpoint and a second endpoint of the foreground image on a horizontal axis;
extracting a third endpoint and a fourth endpoint of the foreground image on a longitudinal axis;
drawing a first tangent to the foreground image at the first endpoint, a second tangent at the second endpoint, a third tangent at the third endpoint, and a fourth tangent at the fourth endpoint;
the first tangent line, the second tangent line, the third tangent line and the fourth tangent line are intersected to form a circumscribed rectangle.
Further, the circumscribed rectangle is a minimum circumscribed rectangle; the center of the minimum bounding rectangle is highlighted.
Further, the removing the shadow in the foreground image according to a preset shadow removing method includes:
acquiring a preset multidirectional mapping table and a multidirectional mapping map set, wherein the multidirectional mapping table records the corresponding relation among an illumination time period, an illumination intensity period, resolution and a characteristic threshold; a plurality of background maps are recorded in the multidirectional mapping map set, the feature set of each background map is different, and the feature set comprises the illumination time period, the illumination intensity period and the resolution of the background map;
selecting a target background image from the multidirectional mapping image set according to the current illumination time period, the illumination intensity period and the shooting equipment;
selecting a target characteristic threshold value from the multidirectional mapping table according to the current illumination time period, the illumination intensity period and the shooting equipment;
and removing shadows in the foreground image according to the target background image and the target feature threshold.
Further, the removing the shadow in the foreground image according to the target background map and the target feature threshold comprises:
obtaining the brightness angle difference of each pixel according to the target background image and the foreground image;
and determining the pixel points with the brightness angle difference smaller than the target characteristic threshold value as shadow areas and removing the shadow areas.
The embodiment of the invention provides a monitoring method in detail; it can quickly and accurately obtain the precise shape of the non-background image in the video picture, thereby realizing functions such as suspect comparison, key picture display and picture-center display. The method is highly intelligent, the obtained shape is highly accurate, and it has a wide application prospect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flow chart of a monitoring method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a neural network construction method provided by an embodiment of the present invention;
FIG. 3 is a flow chart of a method for generating a neural network according to neural network generation parameters, provided by an embodiment of the present invention;
fig. 4 is a flowchart of a method for generating a neural network by using the basic neurons as a clustering center according to a preset generation rule according to an embodiment of the present invention;
FIG. 5 is a flow chart of calculating a state transition matrix of the neural network according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for removing shadows according to an embodiment of the present invention;
fig. 7 is a flowchart of a method for determining an abnormal hole according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention provides a monitoring method, as shown in fig. 1, the method comprises:
s101, acquiring a video frame sequence shot by shooting equipment; the shooting equipment is fixed at a specific position, and the shooting angle of the shooting equipment does not change along with time.
Specifically, in the embodiment of the present invention, the video frame sequence may be obtained from existing shooting equipment such as a dome camera or a bullet camera, where the shooting equipment is fixed at a specific position and the shooting angle does not change with time. Specifically, the video frame sequence is a sequence of color vectors, each vector belonging to the RGB vector space.
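As a purely illustrative sketch (not part of the invention), such a frame sequence can be captured with OpenCV; the stream address and frame count used below are hypothetical placeholders.

```python
import cv2
import numpy as np

def grab_frame_sequence(source="rtsp://camera.example/stream", n_frames=25):
    """Read n_frames consecutive frames from a fixed camera and return them as
    RGB arrays, i.e. each pixel is a color vector in the RGB vector space."""
    cap = cv2.VideoCapture(source)
    frames = []
    while len(frames) < n_frames:
        ok, frame_bgr = cap.read()
        if not ok:
            break  # stream ended or camera unavailable
        # OpenCV delivers BGR; convert so each pixel is an RGB color vector
        frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    cap.release()
    return np.stack(frames) if frames else np.empty((0,))
```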
And S102, extracting a foreground image of the current video according to the video frame sequence based on a pre-trained neural network model.
S103, removing the shadow in the foreground image according to a preset shadow removing method.
And S104, judging whether the foreground image contains abnormal holes caused by the foreground image extraction step, and if so, filling the abnormal holes according to a preset abnormal hole filling method.
Specifically, the foreground image extraction step is step S102. Foreground extraction may generate holes in some scenes, and even if the foreground extraction method is improved, such holes are difficult to eliminate completely. These holes are different from genuine holes of the object in the foreground image: they are abnormal holes that need to be filled, whereas genuine object holes do not need to be filled. How to distinguish abnormal holes is therefore the problem to be solved.
In particular, the abnormal hole filling method may use the related art.
And S105, acquiring a circumscribed rectangle of the foreground image, and highlighting the circumscribed rectangle.
In a possible embodiment, the simple method for obtaining the circumscribed rectangle is as follows:
extracting a first endpoint and a second endpoint of the foreground image on a horizontal axis;
extracting a third endpoint and a fourth endpoint of the foreground image on a longitudinal axis;
drawing a first tangent to the foreground image at the first endpoint, a second tangent at the second endpoint, a third tangent at the third endpoint, and a fourth tangent at the fourth endpoint;
the first tangent line, the second tangent line, the third tangent line and the fourth tangent line are intersected to form a circumscribed rectangle.
In other possible embodiments, the circumscribed rectangle may also be the smallest circumscribed rectangle. The center of the minimum bounding rectangle is highlighted.
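A minimal sketch of both variants, assuming the foreground is given as a binary mask: the axis-aligned circumscribed rectangle follows directly from the extreme points on the two axes (the four tangents), and the minimum circumscribed rectangle and its center can be obtained from OpenCV's rotated-rectangle routine.

```python
import numpy as np
import cv2

def circumscribed_rectangle(mask):
    """Axis-aligned circumscribed rectangle of a binary foreground mask,
    returned as (x_min, y_min, x_max, y_max) -- i.e. the four tangent lines."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # empty foreground
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def min_circumscribed_rectangle(mask):
    """Minimum (possibly rotated) circumscribed rectangle; its center (cx, cy)
    is the point to be highlighted."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = np.vstack([c.reshape(-1, 2) for c in contours])
    (cx, cy), (w, h), angle = cv2.minAreaRect(points)
    return (cx, cy), (w, h), angle
```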
Further, the foreground image can be compared with pictures in a preset suspect picture library to obtain a picture similarity, and if the picture similarity exceeds a preset threshold value, an alarm prompt is given.
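The patent does not fix a particular similarity measure; purely as an illustration, the sketch below scores the foreground image against a suspect picture library with a color-histogram correlation and raises an alarm when the similarity exceeds a preset threshold. Both the metric and the 0.8 threshold are assumptions, not taken from the patent.

```python
import cv2

def alarm_if_suspect(foreground_rgb, suspect_library, threshold=0.8):
    """Compare the foreground image (uint8 RGB) with every picture in the
    suspect library; return True (alarm prompt) if any similarity exceeds
    the preset threshold."""
    def hist(img):
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        return cv2.normalize(h, h).flatten()

    fg_hist = hist(foreground_rgb)
    for picture in suspect_library:
        similarity = cv2.compareHist(fg_hist, hist(picture), cv2.HISTCMP_CORREL)
        if similarity > threshold:
            return True
    return False
```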
The foreground image extraction process is affected by many complex factors such as illumination and background disturbance, so the embodiment of the present invention preferably uses a neural network to extract the foreground image from the video frame sequence, improving the robustness of the extraction on the basis of big-data training. The training process of the neural network differs little from that of the prior art and is therefore not described in detail here. Different neural networks behave differently during machine learning, so in order to meet the specific needs of the embodiment, a specific neural network is preferably provided. The neural network constructed by the embodiment of the invention has the following characteristics:
the neural network satisfies the following formula x (n +1) ═ W1u(n+1)+W2x(n)+W3y (n); wherein x and y are eachFor input and output, W1,W2,W3And respectively representing the conversion matrixes from the current input, the current state and the current output of the neural network to the next state of the neural network.
In particular, W1, W2 and W3 are not changed by the learning process of the neural network, and W1 and W3 are both related to W2. In fact, the contents of the three parameter matrices W1, W2 and W3 of the neural network are correlated, so once the state transformation matrix W2 is determined, a uniquely determined neural network is obtained, and the actual values of W1 and W3 need not be known during the learning and use of the network. The state transformation matrix W2 characterizes the internal parameters of the constructed neural network. The relation between the input and the output of the neural network is uniquely determined by an input-output matrix, which is obtained through training.
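A minimal numerical sketch of the state recursion x(n+1) = W1·u(n+1) + W2·x(n) + W3·y(n); the random matrices and dimensions below are placeholders only, since the actual W2 is constructed as described further on.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_in, n_out = 100, 3, 1           # sizes chosen only for illustration
W1 = rng.normal(size=(n_state, n_in))      # current input  -> next state
W2 = rng.normal(size=(n_state, n_state))   # current state  -> next state
W3 = rng.normal(size=(n_state, n_out))     # current output -> next state

def step(x, u_next, y):
    """One application of x(n+1) = W1*u(n+1) + W2*x(n) + W3*y(n)."""
    return W1 @ u_next + W2 @ x + W3 @ y
```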
This neural network is generated by biologically simulating the human brain, so its dynamic characteristics are stronger, the coupling between neurons is lower, and it can behave more intelligently during machine learning.
In order to obtain such a neural network, an embodiment of the present invention provides a preferred construction method, as shown in fig. 2, including:
s1, obtaining neural network generation parameters, wherein the generation parameters comprise neuron clustering numbers, neuron density degree parameters, distribution space size parameters and neuron total numbers.
Specifically, the neuron clustering number, the neuron density parameter, the distribution space size parameter and the total number of neurons belong to known parameters, and the specific content depends on the requirements of the user.
And S2, generating a neural network according to the neural network generation parameters.
S3, calculating a state transformation matrix W2 of the neural network, wherein the state transformation matrix is used for obtaining the internal state of the neural network at the next moment from its current internal state.
After the neural network is successfully constructed, it should be further trained. The training process yields an input-output mapping matrix, which uniquely determines the output from the input; the specific training method can refer to the prior art. The input and output of the neural network have a uniquely determined relationship Y = Wout·X, so it suffices to determine the input-output mapping matrix Wout with a neural network training method of the prior art.
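The patent leaves the training itself to the prior art. One conventional way to obtain the read-out matrix Wout of the relationship Y = Wout·X is linear least squares over collected state/target pairs; the ridge term below is an assumption added only to keep the solution numerically stable.

```python
import numpy as np

def fit_readout(states, targets, ridge=1e-6):
    """Solve Y = Wout X in the least-squares sense.
    states : (n_state, T) matrix whose columns are collected internal states X
    targets: (n_out,  T) matrix whose columns are the desired outputs Y"""
    X, Y = states, targets
    return Y @ X.T @ np.linalg.inv(X @ X.T + ridge * np.eye(X.shape[0]))
```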
The generating a neural network according to the neural network generation parameter, as shown in fig. 3, includes:
and S21, obtaining the basic neurons according to the neuron clustering number.
And S22, generating a neural network by taking the basic neurons as a clustering center according to a preset generation rule, wherein the number of the neurons in the neural network is the same as the total number of the neurons, each neuron in the neural network is bidirectionally interconnected with the adjacent neurons, and each neuron in the neural network is connected with the neuron per se according to a preset probability.
Wherein, the meaning that each neuron in the neural network is connected with the neuron by a preset probability is as follows: the ratio of the number of the neurons with self-feedback connection in the neural network to the total number of the neurons is a preset probability.
And S23, setting an input node and an output node which are connected with the neural network.
Further, in order to facilitate the generation of the neural network, the embodiment of the present invention may first generate a layout diagram of the neural network on the smart device, and represent the interconnection relationship of each neuron of the neural network by using the layout diagram. Therefore, the embodiment of the present invention further discloses a method for obtaining a basic neuron according to the neuron clustering number from the perspective of a layout, including: acquiring a left upper corner boundary A and a right lower corner boundary B of the rectangular layout; connecting the upper left corner boundary A and the lower right corner boundary B to obtain an oblique diagonal line; and dividing the diagonal line into N equal parts, wherein N is the neuron clustering number, and the equal division points are the basic neurons.
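Read literally, the basic neurons are the equal-division points of the diagonal from the upper-left corner A to the lower-right corner B of the rectangular layout. The text divides the diagonal into N equal parts with N the neuron clustering number; to obtain exactly N cluster centers, the sketch below takes the N interior points at fractions i/(N+1) along the diagonal, which is one possible reading.

```python
import numpy as np

def basic_neurons(A, B, n_clusters):
    """Coordinates of the basic neurons (cluster centers): equal-division
    points of the diagonal A-B of the rectangular layout."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    ts = np.arange(1, n_clusters + 1) / (n_clusters + 1)  # fractions along A-B
    return A + ts[:, None] * (B - A)
```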
Further, the generating a neural network by using the basic neurons as a clustering center according to a preset generation rule is shown in fig. 4, and includes:
S221, randomly generating a new neuron p_new in the rectangular layout, and letting the new neuron p_new actively connect with the pre-existing neurons p_i around it according to the probability P(new, i) = κ·e^(−μ·d(new, i)), wherein κ and μ are the neuron density degree parameter and the distribution space size parameter respectively, and d(new, i) is the Euclidean distance between the newly added neuron and the existing neuron.
S222, meanwhile, the existing neurons p_i around it actively connect with the newly added neuron p_new according to the probability P(new, i) = κ·e^(−μ·d(new, i)).
S223, judging whether the newly added neuron p_new generates a bidirectional interconnection with at least one existing neuron p_i; if so, the newly added neuron is retained and becomes an existing neuron; if not, the newly added neuron is deleted.
In the process of constructing the bidirectionally interconnected neurons, the connection probability of the newly added neurons and the neurons nearby the newly added neurons is inversely related to the distance, so that a neural network with a large number of neurons close to the basic neurons and a small number of neurons far from the basic neurons can be formed.
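A sketch of S221-S223 under the assumptions stated so far: candidate neurons are dropped uniformly into the layout, each candidate and each existing neuron try to connect to one another with probability κ·e^(−μ·d), only links that fire in both directions are kept (a simplification, since the text requires bidirectional interconnection), a self-feedback connection is added with a preset probability, and candidates with no bidirectional link are discarded.

```python
import numpy as np

rng = np.random.default_rng(1)

def grow_network(basic, total, kappa, mu, width, height, p_self=0.1):
    """Grow neuron positions and a set of directed connections, starting from
    the basic (cluster-center) neurons, until `total` neurons are retained."""
    neurons = [tuple(p) for p in np.asarray(basic, dtype=float)]
    edges = set()                                  # directed connections (i, j)
    while len(neurons) < total:
        p_new = (rng.uniform(0, width), rng.uniform(0, height))
        new_idx = len(neurons)
        linked = False
        for i, p_i in enumerate(neurons):
            d = np.hypot(p_new[0] - p_i[0], p_new[1] - p_i[1])
            prob = kappa * np.exp(-mu * d)
            fwd = rng.random() < prob              # p_new actively connects to p_i
            back = rng.random() < prob             # p_i actively connects to p_new
            if fwd and back:                       # bidirectional interconnection
                edges.add((new_idx, i))
                edges.add((i, new_idx))
                linked = True
        if linked:
            if rng.random() < p_self:              # self-feedback with preset probability
                edges.add((new_idx, new_idx))
            neurons.append(p_new)                  # retained; becomes an existing neuron
        # otherwise the candidate is deleted (simply not appended)
    return np.asarray(neurons), edges
```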
The calculation of the state transformation matrix W2 of the neural network, as shown in fig. 5, includes:
s231, selecting the basic neurons close to the center of the rectangular layout as reference points, and calculating the distances between other neurons and the reference points.
S232, sorting the neurons in ascending order of this distance; the position of each neuron in the sorted result is its number in the state transformation matrix W2.
And S233, setting cluster center numbers for each basic neuron, and determining the numbers of clusters to which the neurons belong.
Specifically, the number of the cluster to which each neuron belongs can be obtained according to the formula Ci = argmin_c d(Ni, Zc), where Ci denotes the number of the cluster to which neuron Ni belongs, Zc is the coordinate of the basic neuron whose cluster number is c, and d(Ni, Zc) is the Euclidean distance between neuron Ni and basic neuron Zc.
S234, calculating the connection strength between the neurons that have an interconnection relationship, and obtaining the state transformation matrix W2 according to the connection strengths.
Specifically, the state transformation matrix W2 is calculated as follows:
S2341, calculating the relationship between any two neurons Ni and Nj.
Specifically, if the two neurons Ni and Nj have the same coordinates, the relationship is a class-one relationship; if the two neurons Ni and Nj have different coordinates but belong to the same cluster, the relationship is a class-two relationship; otherwise it is a class-three relationship.
S2342, obtaining, according to this relationship, the element value w_ij of the state transformation matrix W2 associated with the two neurons Ni and Nj:
obtaining the connection strength parameter variation interval α ∈ [−t1, t1] corresponding to the class-one relationship, the variation interval β ∈ [−t2, t2] corresponding to the class-two relationship, and the variation interval γ ∈ [−t3, t3] corresponding to the class-three relationship;
the element value is then determined from the relationship: w_ij is taken from the interval α when Ni and Nj are in a class-one relationship, from β when they are in a class-two relationship, and from γ when they are in a class-three relationship.
Specifically, the setting of α is related to the coupling degree of the neuron clusters and can be adjusted according to actual needs, while the settings of β and γ are related to the stability of the neural network and also need to be adjusted according to actual needs.
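Putting S231-S234 together in one sketch: neurons are numbered by ascending distance to the basic neuron nearest the layout center, each neuron is assigned the cluster of its nearest basic neuron (Ci = argmin_c d(Ni, Zc)), and the entry w_ij is drawn from the interval α, β or γ according to the class of relationship between Ni and Nj. The exact piecewise expression appears only as a formula image in the original filing, so the uniform draw below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def state_matrix(positions, basic, edges, t1, t2, t3):
    """Build W2 for neurons at `positions`, basic (cluster-center) neurons
    `basic`, and the set of directed connections `edges` (pairs of indices)."""
    positions, basic = np.asarray(positions, float), np.asarray(basic, float)
    # S231: reference point = basic neuron nearest the layout center,
    # approximated here by the centroid of the basic neurons
    center = basic[np.argmin(np.linalg.norm(basic - basic.mean(axis=0), axis=1))]
    # S232: number the neurons by ascending distance to the reference point
    order = np.argsort(np.linalg.norm(positions - center, axis=1))
    rank = np.empty(len(order), dtype=int)
    rank[order] = np.arange(len(order))
    # S233: cluster number of each neuron = index of its nearest basic neuron
    cluster = np.argmin(
        np.linalg.norm(positions[:, None, :] - basic[None, :, :], axis=2), axis=1)
    # S234: fill w_ij from the interval matching the relationship class
    n = len(positions)
    W2 = np.zeros((n, n))
    for i, j in edges:
        if np.allclose(positions[i], positions[j]):
            bound = t1        # class one: same coordinates (e.g. self-feedback)
        elif cluster[i] == cluster[j]:
            bound = t2        # class two: same cluster
        else:
            bound = t3        # class three: different clusters
        W2[rank[i], rank[j]] = rng.uniform(-bound, bound)
    return W2
```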
Further, on the basis of obtaining the foreground image, an embodiment of the present invention further discloses a method for removing a shadow as shown in fig. 6, including:
s1031, acquiring a preset multidirectional mapping table and a multidirectional mapping atlas, wherein the multidirectional mapping table records the corresponding relation among the illumination time period, the illumination intensity period, the resolution and the characteristic threshold; a plurality of background maps are recorded in the multidirectional mapping map set, the feature set of each background map is different, and the feature set comprises the illumination time period, the illumination intensity period and the resolution of the background map.
In studying shadows and their optical expression, the inventors found that a certain feature exhibits a jump between the shadow area and the non-shadow area of the foreground image. In the embodiment of the present invention this feature value is defined as the brightness angle difference, which is computed from the color vector of a pixel in the corresponding background image and the color vector of the same pixel in the current foreground image. Specifically, the background image is related to the illumination time period, the illumination intensity period and the resolution, and the brightness angle difference used as the threshold for distinguishing shadow from non-shadow is likewise related to the illumination time period, the illumination intensity period and the resolution.
In order to perform shadow processing based on this finding, in the embodiment of the present invention a multi-directional mapping atlas corresponding to the shooting position of the shooting device is obtained in advance. The multi-directional mapping atlas records images obtained, with no pedestrians present, under scenes of different illumination time periods, different illumination intensity periods and different resolutions; these images serve as background images.
Further, based on the statistical result, the embodiment of the present invention obtains a multi-way mapping table in advance, where the multi-way mapping table is used to query a feature threshold according to an illumination time period, an illumination intensity period, and a resolution, and the feature threshold is used to distinguish shadow pixels from non-shadow pixels in a foreground.
S1032, selecting a target background image from the multidirectional mapping image set according to the current illumination time period, the illumination intensity period and the shooting equipment.
S1033, selecting a target characteristic threshold value from the multidirectional mapping table according to the current illumination time period, the illumination intensity period and the shooting equipment.
S1034, removing the shadow in the foreground image according to the target background image and the target feature threshold value.
Specifically, the removing the shadow in the foreground image according to the target background image and the target feature threshold includes:
s10341, obtaining the brightness angle difference of each pixel according to the target background image and the foreground image.
S10342, determining the pixel points with the brightness angle difference smaller than the target characteristic threshold as shadow areas and removing the shadow areas.
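The brightness angle difference itself is given only as a formula image in the original filing. One natural reading, used here purely as an assumption, is the angle between the color vector of a pixel in the target background image and the color vector of the same pixel in the foreground frame; pixels of the foreground mask whose angle is below the target feature threshold are then treated as shadow and removed.

```python
import numpy as np

def remove_shadow(foreground_rgb, background_rgb, mask, feature_threshold):
    """Zero out mask pixels whose brightness angle difference (assumed here to
    be the angle between background and foreground color vectors) is smaller
    than the target feature threshold from the multi-directional mapping table."""
    fg = foreground_rgb.astype(np.float64)
    bg = background_rgb.astype(np.float64)
    dot = np.sum(fg * bg, axis=2)
    norms = np.linalg.norm(fg, axis=2) * np.linalg.norm(bg, axis=2) + 1e-9
    angle = np.arccos(np.clip(dot / norms, -1.0, 1.0))  # brightness angle difference
    shadow = (angle < feature_threshold) & (mask > 0)
    cleaned = mask.copy()
    cleaned[shadow] = 0                                  # remove the shadow area
    return cleaned
```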
Before filling the abnormal hole, an embodiment of the present invention preferably further provides a method for determining an abnormal hole, as shown in fig. 7, where the method includes:
and S10, acquiring a pixel difference value L (x, y) of a pixel I (x, y) in the foreground image at the current moment and a corresponding target background image B (x, y) of the pixel I (x, y) -B (x, y).
S20, obtaining an abnormal cavity judgment threshold T (x, y) at the current moment.
And S30, if the pixel difference value is larger than the judgment threshold value, judging that the pixel belongs to an abnormal cavity.
In fact, if a pedestrian is captured in the video, the pedestrian is very likely to move, and whether the pedestrian is moving or still has a great influence on the accuracy of the abnormal hole judgment. Therefore, in a preferred embodiment of the present invention, the target background image B(x, y) and the abnormal hole judgment threshold T(x, y) are both time-dependent, written Bt(x, y) and Tt(x, y), and are updated from one moment to the next according to whether the corresponding pixel belongs to a moving object, where γ is a constant that does not change over time and can be set empirically.
Further, before obtaining the pixel difference value, the method further includes:
and acquiring a preset multidirectional parameter set, wherein the multidirectional parameter set records the corresponding relation among the illumination time period, the illumination intensity period, the resolution and the abnormal cavity judgment basic threshold.
Specifically, the initial values of Bt(x, y) and Tt(x, y) are, respectively, the target background image selected from the multidirectional mapping atlas according to the current illumination time period, illumination intensity period and shooting equipment, and the abnormal hole judgment basic threshold selected from the multidirectional parameter set according to the current illumination time period, illumination intensity period and shooting equipment.
Under this relationship between the target background image B(x, y), the abnormal hole judgment threshold T(x, y) and time, their values at the object corresponding to the foreground image are not changed while the object moves, and are updated while the object is static. Of course, if the illumination time period, the illumination intensity period or the shooting equipment changes during the update, the values are reinitialized.
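The time update of Bt(x, y) and Tt(x, y) is likewise given only as a formula image. The sketch below implements one common reading that is consistent with the surrounding text: both quantities are frozen at pixels covered by a moving object and blended toward the current frame with the empirically set constant γ elsewhere, with the threshold tied to the recent pixel difference. The absolute difference and the factor 5 on the difference term are assumptions, not taken from the patent.

```python
import numpy as np

def update_background_and_threshold(B, T, frame, moving_mask, gamma=0.9):
    """One time step of the assumed Bt / Tt recursion: pixels under a moving
    object keep their previous values; static pixels are blended toward the
    current frame with weight gamma."""
    frame = frame.astype(np.float64)
    diff = np.abs(frame - B)
    static = ~moving_mask
    B_next, T_next = B.copy(), T.copy()
    B_next[static] = gamma * B[static] + (1.0 - gamma) * frame[static]
    T_next[static] = gamma * T[static] + (1.0 - gamma) * (5.0 * diff[static])
    return B_next, T_next

def abnormal_hole_mask(frame, B, T):
    """S10-S30: a pixel is judged to belong to an abnormal hole when its
    pixel difference |I - B| exceeds the judgment threshold T."""
    return np.abs(frame.astype(np.float64) - B) > T
```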
The embodiment of the invention thus provides a monitoring method in detail, together with a detailed technical scheme for filling abnormal holes in that method, so that the precise shape of the non-background image in a video picture can be obtained quickly and accurately, thereby realizing functions such as suspect comparison, key picture display and picture-center display. The method is highly intelligent, the obtained shape is highly accurate, and it has a wide application prospect.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. A method of monitoring, comprising:
acquiring a shot video frame sequence;
extracting a foreground image of a current video according to the video frame sequence based on a pre-trained neural network model;
removing the shadow in the foreground image according to a preset shadow removing method;
judging whether an abnormal hole exists in the foreground image, if so, filling the abnormal hole according to a preset abnormal hole filling method;
acquiring a circumscribed rectangle of the foreground image, and highlighting the circumscribed rectangle;
the neural network construction method comprises the following steps:
acquiring neural network generation parameters, wherein the generation parameters comprise neuron clustering number, neuron density parameter, distribution space size parameter and neuron total number;
generating a neural network according to the neural network generation parameters;
calculating a state transformation matrix of the neural network, wherein the state transformation matrix is used for acquiring the next internal state of the neural network according to the current internal state of the neural network;
the generating a neural network according to the neural network generation parameters includes:
obtaining basic neurons according to the neuron clustering number;
generating a neural network by taking the basic neurons as a clustering center according to a preset generation rule, wherein the number of the neurons in the neural network is the same as the total number of the neurons, each neuron in the neural network is bidirectionally interconnected with the adjacent neurons, and each neuron in the neural network is connected with the neuron per se according to a preset probability;
wherein, the meaning that each neuron in the neural network is connected with the neuron by a preset probability is as follows: the ratio of the number of the neurons with self-feedback connection in the neural network to the total number of the neurons is a preset probability;
setting an input node and an output node connected with the neural network;
the generating a neural network by taking the basic neurons as clustering centers according to a preset generating rule comprises the following steps:
randomly generating a new neuron p_new in the rectangular layout chart, and enabling the new neuron p_new to actively connect with the existing neurons p_i around it according to the probability P(new, i) = κ·e^(−μ·d(new, i)), wherein κ and μ are the neuron density degree parameter and the distribution space size parameter respectively, and d(new, i) is the Euclidean distance between the newly added neuron and the existing neuron;
meanwhile, the existing neurons p_i around it actively connect with the newly added neuron p_new according to the probability P(new, i) = κ·e^(−μ·d(new, i));
judging whether the newly added neuron p_new generates a bidirectional interconnection with at least one existing neuron p_i; if so, reserving the newly added neuron, wherein the newly added neuron becomes an existing neuron; and if not, deleting the newly added neuron.
2. The method of claim 1, wherein:
and comparing the foreground image with pictures in a preset suspect picture library to obtain a picture similarity, and if the picture similarity exceeds a preset threshold value, giving an alarm.
3. The method of claim 1, wherein:
the simple acquisition method of the circumscribed rectangle comprises the following steps:
extracting a first endpoint and a second endpoint of the foreground image on a horizontal axis;
extracting a third endpoint and a fourth endpoint of the foreground image on a longitudinal axis;
drawing a first tangent to the foreground image at the first endpoint, a second tangent at the second endpoint, a third tangent at the third endpoint, and a fourth tangent at the fourth endpoint;
the first tangent line, the second tangent line, the third tangent line and the fourth tangent line are intersected to form a circumscribed rectangle.
4. The method of claim 1, wherein:
the external rectangle is the minimum external rectangle; the center of the minimum bounding rectangle is highlighted.
5. The method of claim 1, wherein:
the removing the shadow in the foreground image according to a preset shadow removing method comprises the following steps:
acquiring a preset multidirectional mapping table and a multidirectional mapping map set, wherein the multidirectional mapping table records the corresponding relation among an illumination time period, an illumination intensity period, resolution and a characteristic threshold; a plurality of background maps are recorded in the multidirectional mapping map set, the feature set of each background map is different, and the feature set comprises the illumination time period, the illumination intensity period and the resolution of the background map;
selecting a target background image from the multidirectional mapping image set according to the current illumination time period, the illumination intensity period and the shooting equipment;
selecting a target characteristic threshold value from the multidirectional mapping table according to the current illumination time period, the illumination intensity period and the shooting equipment;
and removing shadows in the foreground image according to the target background image and the target feature threshold.
6. The method of claim 5, wherein:
the removing the shadow in the foreground image according to the target background map and the target feature threshold comprises:
obtaining a brightness angle difference of each pixel according to the target background image and the foreground image, wherein the brightness angle difference is defined from the color vector of a pixel in the corresponding background image and the color vector of the same pixel in the current foreground image; and determining the pixel points with the brightness angle difference smaller than the target characteristic threshold value as shadow areas and removing the shadow areas.
CN201811648286.9A 2018-12-30 2018-12-30 Monitoring method Active CN109726691B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811648286.9A CN109726691B (en) 2018-12-30 2018-12-30 Monitoring method
CN202011170798.6A CN112232265A (en) 2018-12-30 2018-12-30 High-accuracy monitoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811648286.9A CN109726691B (en) 2018-12-30 2018-12-30 Monitoring method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202011170798.6A Division CN112232265A (en) 2018-12-30 2018-12-30 High-accuracy monitoring method

Publications (2)

Publication Number Publication Date
CN109726691A CN109726691A (en) 2019-05-07
CN109726691B true CN109726691B (en) 2020-12-04

Family

ID=66299467

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011170798.6A Withdrawn CN112232265A (en) 2018-12-30 2018-12-30 High-accuracy monitoring method
CN201811648286.9A Active CN109726691B (en) 2018-12-30 2018-12-30 Monitoring method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011170798.6A Withdrawn CN112232265A (en) 2018-12-30 2018-12-30 High-accuracy monitoring method

Country Status (1)

Country Link
CN (2) CN112232265A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236606A (en) * 2008-03-07 2008-08-06 北京中星微电子有限公司 Shadow cancelling method and system in vision frequency monitoring
CN101635852A (en) * 2009-08-26 2010-01-27 北京航空航天大学 Method for detecting real-time moving object based on adaptive background modeling
CN102509101A (en) * 2011-11-30 2012-06-20 昆山市工业技术研究院有限责任公司 Background updating method and vehicle target extracting method in traffic video monitoring
CN103208126A (en) * 2013-04-17 2013-07-17 同济大学 Method for monitoring moving object in natural environment
CN104318263A (en) * 2014-09-24 2015-01-28 南京邮电大学 Real-time high-precision people stream counting method
CN104657712A (en) * 2015-02-09 2015-05-27 惠州学院 Method for detecting masked person in monitoring video
CN107944392A (en) * 2017-11-25 2018-04-20 周晓风 A kind of effective ways suitable for cell bayonet Dense crowd monitor video target mark
CN108154518A (en) * 2017-12-11 2018-06-12 广州华多网络科技有限公司 A kind of method, apparatus of image procossing, storage medium and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI368185B (en) * 2008-11-06 2012-07-11 Ind Tech Res Inst Method for detecting shadow of object
CN104424507B (en) * 2013-08-28 2020-03-03 杨凤琴 Prediction method and prediction device of echo state network
CN106845705A (en) * 2017-01-19 2017-06-13 国网山东省电力公司青岛供电公司 The Echo State Networks load forecasting model of subway power supply load prediction system
CN108734264A (en) * 2017-04-21 2018-11-02 展讯通信(上海)有限公司 Deep neural network model compression method and device, storage medium, terminal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236606A (en) * 2008-03-07 2008-08-06 北京中星微电子有限公司 Shadow cancelling method and system in vision frequency monitoring
CN101635852A (en) * 2009-08-26 2010-01-27 北京航空航天大学 Method for detecting real-time moving object based on adaptive background modeling
CN102509101A (en) * 2011-11-30 2012-06-20 昆山市工业技术研究院有限责任公司 Background updating method and vehicle target extracting method in traffic video monitoring
CN103208126A (en) * 2013-04-17 2013-07-17 同济大学 Method for monitoring moving object in natural environment
CN104318263A (en) * 2014-09-24 2015-01-28 南京邮电大学 Real-time high-precision people stream counting method
CN104657712A (en) * 2015-02-09 2015-05-27 惠州学院 Method for detecting masked person in monitoring video
CN107944392A (en) * 2017-11-25 2018-04-20 周晓风 A kind of effective ways suitable for cell bayonet Dense crowd monitor video target mark
CN108154518A (en) * 2017-12-11 2018-06-12 广州华多网络科技有限公司 A kind of method, apparatus of image procossing, storage medium and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research and Application of Object Detection and Shadow Elimination Based on Background Updating; Wei Yan; China Master's Theses Full-text Database, Information Science and Technology Series; 2013-12-15 (No. 12); abstract pp. I-II, section 1 pp. 1 and 3, section 2 pp. 7-8, 11, 15-17, section 3 pp. 28-29, section 4 pp. 34-35 and 37, section 5 pp. 46-47, figs. 3-8(d-f) and 5-4 *
Research on Several Key Algorithms for Video Moving Object Detection; Liu Yalin; China Master's Theses Full-text Database, Information Science and Technology Series; 2012-07-15 (No. 07); p. I138-2081 *

Also Published As

Publication number Publication date
CN109726691A (en) 2019-05-07
CN112232265A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
WO2021017606A1 (en) Video processing method and apparatus, and electronic device and storage medium
CN105518744A (en) Pedestrian re-identification method and equipment
CN106951870B (en) Intelligent detection and early warning method for active visual attention of significant events of surveillance video
CN110807385A (en) Target detection method and device, electronic equipment and storage medium
CN111813997B (en) Intrusion analysis method, device, equipment and storage medium
CN109145766A (en) Model training method, device, recognition methods, electronic equipment and storage medium
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
CN111383244B (en) Target detection tracking method
CN109740527B (en) Image processing method in video frame
CN112489143A (en) Color identification method, device, equipment and storage medium
CN111723773B (en) Method and device for detecting carryover, electronic equipment and readable storage medium
CN110516707B (en) Image labeling method and device and storage medium thereof
CN113065379B (en) Image detection method and device integrating image quality and electronic equipment
CN114092576A (en) Image processing method, device, equipment and storage medium
CN112734747B (en) Target detection method and device, electronic equipment and storage medium
CN111582654B (en) Service quality evaluation method and device based on deep cycle neural network
CN109727218B (en) Complete graph extraction method
CN112883827A (en) Method and device for identifying designated target in image, electronic equipment and storage medium
CN113570615A (en) Image processing method based on deep learning, electronic equipment and storage medium
US20230386185A1 (en) Statistical model-based false detection removal algorithm from images
CN116912774A (en) Infrared image target identification method, electronic device and storage medium of power transmission and transformation equipment based on edge calculation
CN109726691B (en) Monitoring method
CN112488985A (en) Image quality determination method, device and equipment
CN112819859B (en) Multi-target tracking method and device applied to intelligent security
CN114549809A (en) Gesture recognition method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201113

Address after: 246000, room 1, building 80, No. 211, Tianzhu mountain road, Anqing Development Zone, Anhui, China

Applicant after: Anhui rungu Technology Co.,Ltd.

Address before: 310000 Room 209, 2nd Floor, Building 1180 Binan Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU MINGZHIYUN EDUCATION TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant