CN112232265A - High-accuracy monitoring method

High-accuracy monitoring method

Info

Publication number
CN112232265A
Authority
CN
China
Prior art keywords
neural network
foreground image
neurons
neuron
preset
Prior art date
Legal status
Withdrawn
Application number
CN202011170798.6A
Other languages
Chinese (zh)
Inventor
Jin Tao
Jiang Hao
Current Assignee
Hangzhou Mingzhiyun Education Technology Co., Ltd.
Original Assignee
Hangzhou Mingzhiyun Education Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hangzhou Mingzhiyun Education Technology Co., Ltd.
Priority to CN202011170798.6A
Publication of CN112232265A

Classifications

    • G06V 20/10: Image or video recognition or understanding; Scenes; Scene-specific elements; Terrestrial scenes
    • G06N 3/02: Computing arrangements based on biological models; Neural networks
    • G06T 5/77: Image enhancement or restoration; Retouching; Inpainting; Scratch removal
    • G06T 7/194: Image analysis; Segmentation involving foreground-background segmentation
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06T 2207/10016: Indexing scheme for image analysis; Image acquisition modality; Video; Image sequence
    • G06T 2207/30232: Indexing scheme for image analysis; Subject of image; Surveillance

Abstract

The invention provides a high-accuracy monitoring method, which comprises the following steps: acquiring a shot video frame sequence; extracting a foreground image of the current video from the video frame sequence based on a pre-trained neural network model; removing the shadow in the foreground image according to a preset shadow removing method; judging whether an abnormal hole exists in the foreground image and, if so, filling the abnormal hole according to a preset abnormal hole filling method; and acquiring the circumscribed rectangle of the foreground image and highlighting it. The method can quickly and accurately obtain the accurate shape of a non-background image in the video picture, thereby realizing functions such as suspect comparison, key picture display and picture center display. The method is highly intelligent, the obtained graph is highly accurate, and the application prospect is broad.

Description

High-accuracy monitoring method
Technical Field
The invention relates to the field of monitoring, in particular to a monitoring method with high accuracy.
Background
An intelligent monitoring system applies image processing, pattern recognition and computer vision techniques. Through an intelligent video analysis module added to the monitoring system, it uses the powerful data processing capacity of a computer to filter useless or interfering information out of video pictures, automatically recognize different objects, and analyze and extract the key useful information in a video source. It can thus locate an accident scene quickly and accurately, judge abnormal conditions in the monitored picture, and raise an alarm or trigger other actions in the fastest and best way. In this way the system provides early warning before an incident, handling during it, and timely evidence collection after it, achieving fully automatic, all-weather, real-time monitoring.
Intelligent monitoring is already widely applied in the prior art, but the techniques for extracting images from a flowing video stream and automatically processing the extracted images remain immature, so full automation is difficult to achieve and human inspection is still required. Because automatic image processing is immature, alarms raised automatically on the basis of image processing techniques have a high false-alarm rate, and the accurate outline and position of a suspected target are difficult to obtain.
Disclosure of Invention
In order to solve the technical problem, the invention provides a monitoring method with high accuracy. The invention is realized by the following technical scheme:
a high accuracy monitoring method comprising:
acquiring a shot video frame sequence;
extracting a foreground image of a current video according to the video frame sequence based on a pre-trained neural network model;
removing the shadow in the foreground image according to a preset shadow removing method;
judging whether an abnormal hole exists in the foreground image, if so, filling the abnormal hole according to a preset abnormal hole filling method;
and acquiring a circumscribed rectangle of the foreground image, and highlighting the circumscribed rectangle.
Further, the foreground image can be compared with pictures in a preset suspect picture library to obtain a picture similarity, and if the picture similarity exceeds a preset threshold, an alarm prompt is given.
Further, the simple method for acquiring the circumscribed rectangle is as follows:
extracting a first endpoint and a second endpoint of the foreground image on a horizontal axis;
extracting a third endpoint and a fourth endpoint of the foreground image on a longitudinal axis;
drawing a first tangent to the foreground image at the first endpoint, a second tangent at the second endpoint, a third tangent at the third endpoint, and a fourth tangent at the fourth endpoint;
the first tangent line, the second tangent line, the third tangent line and the fourth tangent line are intersected to form a circumscribed rectangle.
Further, the circumscribed rectangle is the minimum circumscribed rectangle; the center of the minimum circumscribed rectangle is highlighted.
Further, the removing the shadow in the foreground image according to a preset shadow removing method includes:
acquiring a preset multidirectional mapping table and a multidirectional mapping map set, wherein the multidirectional mapping table records the corresponding relation among an illumination time period, an illumination intensity period, resolution and a characteristic threshold; a plurality of background maps are recorded in the multidirectional mapping map set, the feature set of each background map is different, and the feature set comprises the illumination time period, the illumination intensity period and the resolution of the background map;
selecting a target background image from the multidirectional mapping image set according to the current illumination time period, the illumination intensity period and the shooting equipment;
selecting a target characteristic threshold value from the multidirectional mapping table according to the current illumination time period, the illumination intensity period and the shooting equipment;
and removing shadows in the foreground image according to the target background image and the target feature threshold.
Further, the removing the shadow in the foreground image according to the target background map and the target feature threshold comprises:
obtaining the brightness angle difference of each pixel according to the target background image and the foreground image;
and determining the pixel points with the brightness angle difference smaller than the target characteristic threshold value as shadow areas and removing the shadow areas.
The embodiment of the invention provides a high-accuracy monitoring method in detail; the accurate shape of a non-background image in the video picture can be obtained quickly and accurately, so that functions such as suspect comparison, key picture display and picture center display can be realized. The method is highly intelligent, the obtained graph is highly accurate, and the application prospect is broad.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of a monitoring method with high accuracy according to an embodiment of the present invention;
FIG. 2 is a flow chart of a neural network construction method provided by an embodiment of the present invention;
FIG. 3 is a flow chart of a method for generating a neural network according to neural network generation parameters provided by an embodiment of the present invention;
FIG. 4 is a flowchart of a method for generating a neural network by taking the basic neurons as clustering centers according to a preset generation rule, provided by an embodiment of the present invention;
FIG. 5 is a flow chart of calculating a state transition matrix of the neural network according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for removing shadows according to an embodiment of the present invention;
FIG. 7 is a flowchart of a method for determining an abnormal hole according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention provides a monitoring method with high accuracy, as shown in fig. 1, the method comprises the following steps:
s101, acquiring a video frame sequence shot by shooting equipment; the shooting equipment is fixed at a specific position, and the shooting angle of the shooting equipment does not change along with time.
Specifically, in the embodiment of the present invention, the video frame sequence may be obtained from existing shooting equipment such as a dome camera or a bullet camera, where the shooting equipment is fixed at a specific position and its shooting angle does not change with time. Specifically, the video frame sequence is a sequence of color vectors, which are vectors in RGB vector space.
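For illustration only, the following is a minimal sketch of acquiring a video frame sequence from a fixed camera with OpenCV; the library choice, device index and frame count are assumptions, as the patent does not specify an implementation.

import cv2

def capture_frames(source=0, max_frames=100):
    # source: device index or stream URL of the fixed shooting equipment
    cap = cv2.VideoCapture(source)
    frames = []
    while len(frames) < max_frames:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        # OpenCV delivers BGR; convert so each pixel is an RGB color vector
        frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames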
And S102, extracting a foreground image of the current video according to the video frame sequence based on a pre-trained neural network model.
S103, removing the shadow in the foreground image according to a preset shadow removing method.
And S104, judging whether the foreground image contains abnormal holes caused by the foreground image extraction step and, if so, filling the abnormal holes according to a preset abnormal hole filling method.
Specifically, the foreground image extraction step is step S102. Foreground extraction may produce holes in some scenes, and even with an improved extraction method they are difficult to eliminate completely. Such a hole differs from a hole that belongs to the object in the foreground image: it is an abnormal hole and needs to be filled, whereas an object's own hole does not. How to distinguish abnormal holes is therefore a problem to be solved.
In particular, the abnormal hole filling method may use the related art.
And S105, acquiring the external rectangle of the foreground image, and highlighting the external rectangle.
In a possible embodiment, the simple method for obtaining the circumscribed rectangle is as follows:
extracting a first endpoint and a second endpoint of the foreground image on a horizontal axis;
extracting a third endpoint and a fourth endpoint of the foreground image on a longitudinal axis;
drawing a first tangent to the foreground image at the first endpoint, a second tangent at the second endpoint, a third tangent at the third endpoint, and a fourth tangent at the fourth endpoint;
the first tangent line, the second tangent line, the third tangent line and the fourth tangent line are intersected to form a circumscribed rectangle.
In other possible embodiments, the circumscribed rectangle may also be the minimum circumscribed rectangle, and the center of the minimum circumscribed rectangle is highlighted.
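As a sketch of both variants, assuming the foreground is given as a binary mask: OpenCV's boundingRect stands in for the endpoint-and-tangent construction (it yields the same axis-aligned rectangle), and minAreaRect gives the minimum circumscribed rectangle; function and variable names are hypothetical.

import cv2
import numpy as np

def circumscribed_rectangles(mask):
    # mask: H x W uint8 array, nonzero where the extracted foreground is
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.int32)
    # Axis-aligned rectangle: equivalent to intersecting the four tangents
    # at the extreme points on the horizontal and vertical axes
    axis_aligned = cv2.boundingRect(pts)            # (x, y, w, h)
    # Minimum (rotated) circumscribed rectangle and its center to highlight
    min_rect = cv2.minAreaRect(pts)                 # ((cx, cy), (w, h), angle)
    center = min_rect[0]
    return axis_aligned, min_rect, center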
Further, the foreground image can be compared with pictures in a preset suspect picture library to obtain a picture similarity, and if the picture similarity exceeds a preset threshold, an alarm prompt is given.
The foreground image extraction process is affected by various complex factors such as illumination and background disturbance. The embodiment of the invention therefore preferably uses a neural network to extract the foreground image from the video frame sequence, improving the robustness of the extraction through big-data training. The training process of the neural network is largely the same as in the prior art and is therefore not described in detail here. Different neural networks behave differently during machine learning, so, to meet the specific needs of the embodiments, the embodiment of the invention preferably provides a specific neural network with the following characteristics:
the neural network satisfies the following formula x (n +1) ═ W1u(n+1)+W2x(n)+W3y (n); wherein x and y are input and output, respectively, W1,W2,W3The conversion matrixes from the current input, the current state and the current output of the neural network to the next state of the neural network are respectively.
In particular, W1,W2,W3Is not changed by the learning process of the neural network, and W1,W3Are all equal to W2It is related. In fact, W of the neural network1,W2,W3Three self-parameter matrix contents are correlated to determine a state transformation matrix W2A uniquely determined neural network is obtained, and W does not need to be known in the actual learning and using process of the neural network1,W3The actual value of (c). State transition matrix W2To characterize the internal parameters of the neural network construction. The relationship between the neural network input and output is uniquely determined by the input and output matrix, which is obtained by training.
This neural network is generated by biologically simulating the human brain, so its dynamic characteristics are stronger, the coupling degree between neurons is lower, and it can behave more intelligently during machine learning.
In order to obtain such a neural network, an embodiment of the present invention provides a preferred construction method, as shown in fig. 2, including:
s1, obtaining neural network generation parameters, wherein the generation parameters comprise neuron clustering numbers, neuron density degree parameters, distribution space size parameters and neuron total numbers.
Specifically, the neuron clustering number, the neuron density parameter, the distribution space size parameter and the total number of neurons belong to known parameters, and the specific content depends on the requirements of the user.
And S2, generating a neural network according to the neural network generation parameters.
S3, calculating the state transformation matrix W2 of the neural network, where the state transformation matrix is used to obtain the next internal state of the neural network from its current internal state.
After the neural network is successfully constructed, it should be further trained; the training process yields an input-output mapping matrix, which alone determines the output from the input. A specific training method can be found in the prior art. The input and output of the neural network have the uniquely determined relationship Y = Wout·X, so it suffices to determine the input-output mapping matrix Wout using a prior-art neural network training method.
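For illustration only, a minimal numerical sketch of the recurrence x(n+1) = W1·u(n+1) + W2·x(n) + W3·y(n) and of fitting the readout Wout of Y = Wout·X by regularized least squares; the patent does not fix a training algorithm, and the dimensions, random fixed matrices and placeholder training data here are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_state, n_out = 8, 100, 4                      # hypothetical sizes

W1 = rng.normal(scale=0.10, size=(n_state, n_in))     # input -> next state
W2 = rng.normal(scale=0.05, size=(n_state, n_state))  # state transformation matrix
W3 = rng.normal(scale=0.05, size=(n_state, n_out))    # output feedback -> next state

def step(x, u_next, y):
    # One state update: x(n+1) = W1 u(n+1) + W2 x(n) + W3 y(n)
    return W1 @ u_next + W2 @ x + W3 @ y

U = rng.normal(size=(200, n_in))                      # placeholder input sequence
Y_target = rng.normal(size=(200, n_out))              # placeholder training targets
x = np.zeros(n_state)
y = np.zeros(n_out)                                   # output feedback held at zero here
states = []
for n in range(len(U)):
    x = step(x, U[n], y)
    states.append(x)
X = np.stack(states, axis=1)                          # n_state x T state matrix

# Only Wout is learned; W1, W2, W3 stay fixed, consistent with the text above.
# Ridge-regularized least squares: Wout = Y X^T (X X^T + eps I)^-1
Wout = Y_target.T @ X.T @ np.linalg.inv(X @ X.T + 1e-6 * np.eye(n_state))
Y_pred = Wout @ X                                     # n_out x T predictions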
The generating a neural network according to the neural network generation parameter, as shown in fig. 3, includes:
and S21, obtaining the basic neurons according to the neuron clustering number.
And S22, generating a neural network by taking the basic neurons as a clustering center according to a preset generation rule, wherein the number of the neurons in the neural network is the same as the total number of the neurons, each neuron in the neural network is bidirectionally interconnected with the adjacent neurons, and each neuron in the neural network is connected with the neuron per se according to a preset probability.
Here, each neuron in the neural network connecting with itself with a preset probability means: the ratio of the number of neurons with self-feedback connections in the neural network to the total number of neurons is the preset probability.
And S23, setting an input node and an output node which are connected with the neural network.
Further, to facilitate generating the neural network, the embodiment of the present invention may first generate a layout diagram of the neural network on the smart device and use the layout diagram to represent the interconnection relationships of the neurons. From the perspective of the layout diagram, the embodiment of the invention further discloses a method for obtaining the basic neurons according to the neuron clustering number: acquiring the upper-left corner boundary A and the lower-right corner boundary B of the rectangular layout diagram; connecting the upper-left corner boundary A and the lower-right corner boundary B to obtain a diagonal line; and dividing the diagonal line into N equal parts, where N is the neuron clustering number and the equal-division points are the basic neurons.
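A minimal sketch of placing the basic neurons on the diagonal; since dividing a segment into N equal parts and taking "the equal-division points" leaves the exact convention ambiguous, this sketch uses the midpoint of each of the N segments so that exactly N points result, which is an assumption.

import numpy as np

def basic_neurons(corner_a, corner_b, n):
    # corner_a, corner_b: upper-left corner A and lower-right corner B
    a = np.asarray(corner_a, dtype=float)
    b = np.asarray(corner_b, dtype=float)
    t = (np.arange(n) + 0.5) / n          # fractional positions along A -> B
    return a + t[:, None] * (b - a)       # shape (n, 2): one (x, y) per cluster

# e.g. basic_neurons((0, 0), (640, 480), 4) places 4 centers on the diagonal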
Further, the generating a neural network by using the basic neurons as a clustering center according to a preset generation rule is shown in fig. 4, and includes:
S221, randomly generating a new neuron in the rectangular layout diagram; the new neuron p_new actively connects to each pre-existing neuron p_i around it with probability P(new, i) = κ·e^(−μ·d(new, i)), where κ and μ are the neuron density parameter and the distribution space size parameter respectively, and d(new, i) is the Euclidean distance between the new neuron and the existing neuron.
S222, simultaneously, each existing neuron p_i around it actively connects to the new neuron p_new with the same probability P(new, i) = κ·e^(−μ·d(new, i)).
S223, judging whether the new neuron p_new forms a bidirectional interconnection with at least one existing neuron p_i; if so, the new neuron is kept and becomes an existing neuron; if not, the new neuron is deleted.
In this construction process of bidirectionally interconnected neurons, the probability that a new neuron connects to nearby neurons is inversely related to distance, which produces a neural network with many neurons close to the basic neurons and few neurons far from them.
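A minimal sketch of this growth rule under stated assumptions: candidate positions are sampled uniformly in the layout rectangle, "bidirectional interconnection" requires the connection probability to fire independently in both directions, and self-feedback links are added with the preset probability from S22; all names are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

def grow_network(centers, total, kappa, mu, p_self, width, height):
    # centers: basic (cluster-center) neurons, which pre-exist in the layout
    pos = [tuple(c) for c in centers]
    edges = set()                              # directed (src, dst) links
    while len(pos) < total:
        p_new = (rng.uniform(0, width), rng.uniform(0, height))
        linked = []
        for i, p_i in enumerate(pos):
            d = np.hypot(p_new[0] - p_i[0], p_new[1] - p_i[1])
            p = kappa * np.exp(-mu * d)        # P(new, i) = kappa * exp(-mu * d)
            if rng.random() < p and rng.random() < p:   # both directions fire
                linked.append(i)
        if linked:                             # keep; it becomes an existing neuron
            j = len(pos)
            pos.append(p_new)
            for i in linked:
                edges.add((i, j))
                edges.add((j, i))
            if rng.random() < p_self:          # self-feedback with preset probability
                edges.add((j, j))
    return np.asarray(pos), edges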
The calculating of the state transformation matrix W2 of the neural network, as shown in fig. 5, includes:
s231, selecting the basic neurons close to the center of the rectangular layout as reference points, and calculating the distances between other neurons and the reference points.
S232, arranging the neurons in ascending order of this distance; each neuron's position in the ordering result is its index in the state transformation matrix W2.
And S233, setting cluster center numbers for each basic neuron, and determining the cluster numbers of the neurons.
Specifically, the cluster to which each neuron belongs can be obtained from the formula C_i = argmin_c d(N_i, Z_c), where C_i is the number of the cluster to which neuron N_i belongs, Z_c is the coordinate of the basic neuron of the cluster numbered c, and d(N_i, Z_c) is the Euclidean distance between neuron N_i and basic neuron Z_c.
S234, calculating the connection strength between the neurons that have an interconnection relationship, and obtaining the state transformation matrix W2 according to the connection strength.
Specifically, the state transformation matrix W2 is calculated as follows:
s2341, calculating any two neurons Ni,NjThe interrelationship between them.
In particular, if the two neurons Ni,NjIf the coordinates are the same, the mutual relationship is a class relationship; if the two neurons Ni,NjIf the coordinates are different but belong to the same cluster, the mutual relationship is a two-class relationship, otherwise, the relationship isThree types of relationships.
S2342, obtaining the two neurons N according to the correlationi,NjCorrelated state transition matrix W2Value w of element(s)ij
Obtaining a connection strength parameter change interval alpha E < -t corresponding to a type of relation1,t1]The variation interval beta E-t of the connection strength parameter corresponding to the second type of relation2,t2]And the change interval gamma E-t of the connection strength parameter corresponding to the three types of relations3,t3];
The element values are determined from the interrelationships.
Specifically, w_ij = α when N_i and N_j have a first-class relationship, w_ij = β when they have a second-class relationship, and w_ij = γ when they have a third-class relationship, where α, β and γ take values in the intervals [−t1, t1], [−t2, t2] and [−t3, t3] respectively.
Specifically, the setting of α relates to the coupling degree of the neuron clusters and can be adjusted according to actual needs; the settings of β and γ relate to the stability of the neural network and likewise need to be adjusted according to actual needs.
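The following sketch assembles W2 from these rules. Drawing each element uniformly from its interval is an assumption (the patent only specifies the intervals), and the data structures are hypothetical; assign_clusters implements C_i = argmin_c d(N_i, Z_c) from S233.

import numpy as np

def assign_clusters(pos, centers):
    # Each neuron joins the cluster of its nearest basic neuron
    d = np.linalg.norm(pos[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

def build_w2(pos, clusters, edges, t1, t2, t3, rng):
    # First class: same coordinates; second class: same cluster;
    # third class: different clusters
    n = len(pos)
    w2 = np.zeros((n, n))
    bounds = {1: t1, 2: t2, 3: t3}
    for i, j in edges:                          # only interconnected pairs
        if np.allclose(pos[i], pos[j]):
            cls = 1
        elif clusters[i] == clusters[j]:
            cls = 2
        else:
            cls = 3
        t = bounds[cls]
        w2[i, j] = rng.uniform(-t, t)           # uniform draw is an assumption
    return w2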
Further, on the basis of obtaining the foreground image, an embodiment of the present invention further discloses a method for removing a shadow, as shown in fig. 6, including:
s1031, acquiring a preset multidirectional mapping table and a multidirectional mapping atlas, wherein the multidirectional mapping table records the corresponding relation among the illumination time period, the illumination intensity period, the resolution and the characteristic threshold; a plurality of background maps are recorded in the multidirectional mapping map set, the feature set of each background map is different, and the feature set comprises the illumination time period, the illumination intensity period and the resolution of the background map.
While studying shadows and their optical expression, the inventors of the embodiment found that a certain feature of the foreground image jumps between its shadow region and its non-shadow region; in the embodiment of the invention this feature value is defined as the brightness angle difference
θ(x, y) = arccos((v_B · v_F) / (|v_B|·|v_F|)),
where v_B and v_F are, respectively, the color vector of a pixel in its corresponding background image and the color vector of that pixel in the current foreground image. Specifically, the background image is related to the illumination time period, the illumination intensity period and the resolution, and the threshold of the brightness angle difference used as the shadow/non-shadow distinguishing feature is likewise related to the illumination time period, the illumination intensity period and the resolution.
In order to perform shadow processing based on this finding, in the embodiment of the present invention a multidirectional mapping atlas corresponding to the shooting position of the shooting device is obtained in advance. The multidirectional mapping atlas records images obtained, with no pedestrians present, under scenes of different illumination time periods, different illumination intensity periods and different resolutions, and uses these images as background images.
Further, based on the statistical result, the embodiment of the present invention obtains a multi-directional mapping table in advance, where the multi-directional mapping table is used to query a feature threshold according to an illumination time period, an illumination intensity period, and a resolution, and the feature threshold is used to distinguish shadow pixels from non-shadow pixels in the foreground.
S1032, selecting a target background image from the multidirectional mapping image set according to the current illumination time period, the illumination intensity period and the shooting equipment.
S1033, selecting a target feature threshold from the multidirectional mapping table according to the current illumination time period, the illumination intensity period and the shooting equipment.
S1034, removing the shadow in the foreground image according to the target background image and the target feature threshold value.
Specifically, the removing the shadow in the foreground image according to the target background image and the target feature threshold includes:
s10341, obtaining the brightness angle difference of each pixel according to the target background image and the foreground image.
S10342, determining the pixel points with the brightness angle difference smaller than the target characteristic threshold as shadow areas and removing the shadow areas.
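A minimal sketch of S1031-S1034 under stated assumptions: the target background image and target feature threshold have already been selected from the multidirectional mapping atlas and table, the brightness angle difference takes the arccos form given above, the threshold is in radians, and shadow pixels are removed by zeroing; all names are hypothetical.

import numpy as np

def remove_shadow(foreground, background, theta_thresh, eps=1e-6):
    # foreground, background: H x W x 3 float arrays of RGB color vectors
    dot = (foreground * background).sum(axis=2)
    norms = np.linalg.norm(foreground, axis=2) * np.linalg.norm(background, axis=2)
    # Brightness angle difference per pixel (angle between the color vectors)
    theta = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    # A shadow keeps the hue direction but darkens it, so the angle stays small
    shadow = theta < theta_thresh
    cleaned = foreground.copy()
    cleaned[shadow] = 0.0                 # remove the detected shadow region
    return cleaned, shadow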
Before filling the abnormal hole, preferably, an embodiment of the present invention further provides a method for determining an abnormal hole, as shown in fig. 7, including:
and S10, acquiring a pixel difference value L (x, y) between a pixel I (x, y) in the foreground image at the current moment and a corresponding target background image B (x, y) as I (x, y) -B (x, y).
S20, obtaining the abnormal hole judgment threshold T(x, y) at the current moment.
And S30, if the pixel difference is larger than the judgment threshold, judging that the pixel belongs to an abnormal hole.
In fact, if a pedestrian is captured in the video, the pedestrian is very likely to move, and whether the pedestrian is moving or still strongly affects the accuracy of abnormal hole judgment. Therefore, in a preferred embodiment of the present invention the target background image B(x, y) and the abnormal hole judgment threshold T(x, y) both depend on time; specifically, the time dependence is:
B_{t+1}(x, y) = B_t(x, y) where the pixel is moving, and B_{t+1}(x, y) = γ·B_t(x, y) + (1 − γ)·I_t(x, y) where the pixel is static; the threshold T_{t+1}(x, y) likewise keeps its previous value where the pixel is moving and is updated in the same exponential-smoothing manner from the current pixel difference where it is static,
where γ is a constant that does not change with time and may be set based on experience.
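A minimal sketch of the hole test (S10-S30) and of this time update, assuming the exponential-smoothing form reconstructed above; the motion mask and the use of the absolute pixel difference in the threshold update are assumptions.

import numpy as np

def abnormal_hole_mask(I_t, B_t, T_t):
    # S10-S30: pixels whose difference from the target background exceeds
    # the current judgment threshold are judged to belong to abnormal holes
    L = np.abs(I_t - B_t)
    return L > T_t

def update_background_and_threshold(I_t, B_t, T_t, moving, gamma=0.9):
    # Update B_t and T_t only where the scene is static; moving regions
    # keep their previous values
    static = ~moving
    B_next, T_next = B_t.copy(), T_t.copy()
    B_next[static] = gamma * B_t[static] + (1 - gamma) * I_t[static]
    diff = np.abs(I_t - B_t)
    T_next[static] = gamma * T_t[static] + (1 - gamma) * diff[static]
    return B_next, T_next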
Further, before obtaining the pixel difference value, the method further includes:
and acquiring a preset multidirectional parameter set, wherein the multidirectional parameter set records the corresponding relation among the illumination time period, the illumination intensity period, the resolution and the abnormal cavity judgment basic threshold.
Specifically, the initial values of B_t(x, y) and T_t(x, y) are, respectively, the target background image selected from the multidirectional mapping atlas according to the current illumination time period, illumination intensity period and shooting equipment, and the abnormal hole judgment basic threshold selected from the multidirectional parameter set according to the current illumination time period, illumination intensity period and shooting equipment.
Under this time dependence of the target background image B(x, y) and the abnormal hole judgment threshold T(x, y), the content corresponding to an object in the foreground image is not updated while the object moves and is updated while the object is static. Of course, if the illumination time period, the illumination intensity period or the shooting equipment changes during the update, the values are reinitialized.
The embodiment of the invention provides, in detail, a high-accuracy monitoring method and a detailed technical scheme for filling abnormal holes within it, so that the accurate shape of a non-background image in the video picture can be obtained quickly and accurately, and functions such as suspect comparison, key picture display and picture center display can be realized. The method is highly intelligent, the obtained graph is highly accurate, and the application prospect is broad.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, indicating that there may be three relationships, e.g., a and/or B, which may indicate: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (2)

1. A high accuracy monitoring method, comprising:
acquiring a shot video frame sequence;
extracting a foreground image of a current video according to the video frame sequence based on a pre-trained neural network model;
removing the shadow in the foreground image according to a preset shadow removing method;
judging whether an abnormal hole exists in the foreground image, if so, filling the abnormal hole according to a preset abnormal hole filling method;
acquiring a circumscribed rectangle of the foreground image, and highlighting the circumscribed rectangle;
the neural network construction method comprises the following steps:
acquiring neural network generation parameters, wherein the generation parameters comprise neuron clustering number, neuron density parameter, distribution space size parameter and neuron total number;
generating a neural network according to the neural network generation parameters;
calculating a state transformation matrix of the neural network, wherein the state transformation matrix is used for acquiring the next internal state of the neural network according to the current internal state of the neural network;
the generating a neural network according to the neural network generation parameters includes:
obtaining basic neurons according to the neuron clustering number;
generating a neural network by taking the basic neurons as a clustering center according to a preset generation rule, wherein the number of the neurons in the neural network is the same as the total number of the neurons, each neuron in the neural network is bidirectionally interconnected with the adjacent neurons, and each neuron in the neural network is connected with the neuron per se according to a preset probability;
wherein each neuron in the neural network being connected with itself with a preset probability means: the ratio of the number of neurons with self-feedback connections in the neural network to the total number of neurons is the preset probability;
setting an input node and an output node connected with the neural network;
the generating a neural network by taking the basic neurons as clustering centers according to a preset generating rule comprises the following steps: randomly generating new neurons in the rectangular layout chart, and connecting the new neurons with the new neuronsThe newly increased neuron pnewActive and its surrounding pre-existing neurons piAccording to the probability P (new, i) ═ k e-μd(new,i)Connecting, wherein kappa and mu are a neuron density parameter and a distribution space size parameter respectively, and d (new, i) is the Euclidean distance between the newly added neuron and the existing neuron;
while the existing neurons p around itiAccording to the probability P (new, i) ═ k e-μd(new,i)Active and newly added neurons pnewConnecting;
judging the newly added neuron pnewWhether or not to contact at least one of the existing neurons piGenerating bidirectional interconnection, if so, reserving the newly added neuron, wherein the newly added neuron becomes an existing neuron; if not, deleting the newly added neurons; comparing the foreground image with pictures in a preset suspicion picture library to obtain picture similarity, and if the picture similarity exceeds a preset threshold value, giving an alarm prompt; the simple acquisition method of the circumscribed rectangle comprises the following steps:
extracting a first endpoint and a second endpoint of the foreground image on a horizontal axis;
extracting a third endpoint and a fourth endpoint of the foreground image on a longitudinal axis;
drawing a first tangent to the foreground image at the first endpoint, a second tangent at the second endpoint, a third tangent at the third endpoint, and a fourth tangent at the fourth endpoint;
the first tangent line, the second tangent line, the third tangent line and the fourth tangent line intersect to form the circumscribed rectangle; the circumscribed rectangle is the minimum circumscribed rectangle; the center of the minimum circumscribed rectangle is highlighted; the removing the shadow in the foreground image according to a preset shadow removing method comprises the following steps:
acquiring a preset multidirectional mapping table and a multidirectional mapping map set, wherein the multidirectional mapping table records the corresponding relation among an illumination time period, an illumination intensity period, resolution and a characteristic threshold; a plurality of background maps are recorded in the multidirectional mapping map set, the feature set of each background map is different, and the feature set comprises the illumination time period, the illumination intensity period and the resolution of the background map;
selecting a target background image from the multidirectional mapping image set according to the current illumination time period, the illumination intensity period and the shooting equipment;
selecting a target characteristic threshold value from the multidirectional mapping table according to the current illumination time period, the illumination intensity period and the shooting equipment;
and removing shadows in the foreground image according to the target background image and the target feature threshold.
2. The method of claim 1, wherein:
the removing the shadow in the foreground image according to the target background map and the target feature threshold comprises: obtaining the brightness angle difference of each pixel according to the target background image and the foreground image, wherein the brightness angle difference is defined as
θ(x, y) = arccos((v_B · v_F) / (|v_B|·|v_F|)),
wherein v_B and v_F respectively represent the color vector of a pixel in its corresponding background image and the color vector of that pixel in the current foreground image; and determining the pixel points whose brightness angle difference is smaller than the target feature threshold as shadow regions and removing the shadow regions.
CN202011170798.6A 2018-12-30 2018-12-30 High-accuracy monitoring method Withdrawn CN112232265A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011170798.6A CN112232265A (en) 2018-12-30 2018-12-30 High-accuracy monitoring method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011170798.6A CN112232265A (en) 2018-12-30 2018-12-30 High-accuracy monitoring method
CN201811648286.9A CN109726691B (en) 2018-12-30 2018-12-30 Monitoring method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811648286.9A Division CN109726691B (en) 2018-12-30 2018-12-30 Monitoring method

Publications (1)

Publication Number Publication Date
CN112232265A (en) 2021-01-15

Family

ID=66299467

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811648286.9A Active CN109726691B (en) 2018-12-30 2018-12-30 Monitoring method
CN202011170798.6A Withdrawn CN112232265A (en) 2018-12-30 2018-12-30 High-accuracy monitoring method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201811648286.9A Active CN109726691B (en) 2018-12-30 2018-12-30 Monitoring method

Country Status (1)

Country Link
CN (2) CN109726691B (en)


Also Published As

Publication number Publication date
CN109726691B (en) 2020-12-04
CN109726691A (en) 2019-05-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210115