CN107145821A - Crowd density detection method and system based on deep learning - Google Patents
Crowd density detection method and system based on deep learning
- Publication number
- CN107145821A CN107145821A CN201710177154.1A CN201710177154A CN107145821A CN 107145821 A CN107145821 A CN 107145821A CN 201710177154 A CN201710177154 A CN 201710177154A CN 107145821 A CN107145821 A CN 107145821A
- Authority
- CN
- China
- Prior art keywords
- crowd
- image
- density
- frame image
- level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
- G06T7/45—Analysis of texture based on statistical description of texture using co-occurrence matrix computation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Abstract
The invention discloses a crowd density detection method and system based on deep learning. The detection method proceeds as follows: background image information is first obtained through background learning, and the target foreground image of each frame is then extracted using this background information. A low-density crowd model is built from frames whose target foreground images have been extracted and which belong to the low-density crowd level, and a high-density crowd model is built from frames whose target foreground images have been extracted and which belong to the high-density crowd levels. Each frame whose crowd density is to be detected is first applied to the low-density crowd model: when the crowd count returned by the low-density crowd model does not exceed a threshold, the crowd density level is judged from that count; when the count exceeds the threshold, the frame is input to the high-density crowd model, which judges the crowd density. The method has the advantages of high detection precision and low computational cost.
Description
Technical field
The invention belongs to the field of machine vision, and more particularly relates to a crowd density detection method and system based on deep learning.
Background technology
With the rapid development of China's economy, urbanization of the population has become increasingly apparent. More and more people pour into cities, so the population density of many public venues (including subways, airports, shopping centres, stadiums, and so on) keeps growing. Crowding is especially common during holiday periods. The crowd, as a special object of management, receives growing attention from society. How to monitor crowds effectively in real time and eliminate the safety hazards brought by overcrowding is therefore one of the problems society urgently needs to solve. For subways, as part of urban rail transit systems, the demand for crowd density detection is even more pressing.
Conventional methods judge whether a scene is crowded from a head count. However, because monitored scenes differ in area, counting people merely through entrance/exit passages or mobile-phone signals consumes considerable manpower and money and produces large errors. Moreover, since different subway locations differ in area, a head count alone cannot accurately judge how dense the crowd in a scene is; for handling emergencies in public venues, the degree of crowding matters more, and the head count serves only as auxiliary data.
Current research on crowd density falls into two classes: pixel-based methods and texture-analysis-based methods. The pixel-based method was first proposed by Davies in the article "Crowd monitoring using image processing" (Electronics & Communication Engineering Journal, 1995, 7(1): 37-47). The crowd foreground is extracted against the background, the number of foreground edge pixels is obtained with an edge detector, a linear model for estimating the crowd count is fitted from annotated head counts, and feeding the extracted foreground edge pixel count into the model yields the corresponding crowd count. Owing to perspective distortion, the numbers of crowd foreground pixels and edge pixels appear larger for points near the camera and smaller for points far from it. Pixel-based methods work well when the crowd density is small, but as the density grows, pedestrians occlude one another and the linear relationship such methods rely on no longer holds.
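As a minimal illustration of the pixel-based approach, one can fit annotated head counts against foreground edge pixel counts with least squares. The calibration data below is hypothetical; the patent specifies only that a linear model is fitted:

```python
import numpy as np

# Hypothetical calibration data: foreground edge pixel counts and the
# annotated head count for a few low-density frames.
edge_pixels = np.array([1200, 2500, 3800, 5100, 6300], dtype=float)
head_counts = np.array([3, 6, 9, 12, 15], dtype=float)

# Fit the linear crowd-count estimation model: count = a * pixels + b.
a, b = np.polyfit(edge_pixels, head_counts, deg=1)

def estimate_count(n_edge_pixels: float) -> float:
    """Estimate the crowd count from a frame's foreground edge pixel count."""
    return a * n_edge_pixels + b

print(round(estimate_count(4400)))  # interpolates between calibrated frames
```

The fitted slope `a` implicitly averages out perspective effects, which is precisely why the model degrades once occlusion breaks the linear relationship.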
In 1998, Marana proposed a crowd density estimation method based on texture analysis. It rests on the observation that crowd images of different densities exhibit different texture patterns: a high-density crowd presents a fine texture pattern, while a low-density crowd image presents a coarse pattern against a low-frequency background. Texture-analysis-based density estimation can handle the high-density case, but the algorithm is computationally heavy, uses many features, and when the background is complex its error on medium- and low-density crowds is large. Since then, how to combine different texture analysis methods to improve the accuracy of crowd density estimation has become a research hotspot. Moreover, in prior-art crowd density detection, the background image used during image processing is generally obtained by averaging each pixel over time; this is sensitive to illumination changes and to the multimodality of the background, so its adaptability degrades as the environment changes, which affects the detection precision of crowd density.
In addition, prior-art crowd density detection systems typically send the captured images over a network to a remote control centre, which analyses the images and performs the detection. Such systems require considerable bandwidth for image transmission and suffer from slow image processing and poor real-time performance.
Content of the invention
The first object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a deep-learning-based crowd density detection method with high detection precision and low computational cost.
The second object of the present invention is to provide a deep-learning-based crowd density detection system for implementing the above method.
The first object of the present invention is achieved through the following technical solution: a crowd density detection method based on deep learning, whose steps are as follows:
S1. Acquire every frame in real time through a camera, then take out the first several frames and perform background learning on them to obtain background image information;
S2. For each subsequent frame, extract its target foreground image by background subtraction, according to the background image information obtained in step S1;
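Step S2 amounts to thresholding the difference between each frame and the learned background. A minimal grayscale sketch follows; the threshold value 40 is an assumption, not specified by the patent:

```python
import numpy as np

def extract_foreground(frame: np.ndarray, background: np.ndarray,
                       thresh: int = 40) -> np.ndarray:
    """Background subtraction: return a binary mask of the target foreground.

    frame, background: 2-D uint8 grayscale images of equal shape.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)  # 1 = foreground pixel

# Toy example: a flat background and a frame with one bright "person" blob.
bg = np.full((4, 4), 100, dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200
mask = extract_foreground(frame, bg)
print(mask.sum())  # 4 foreground pixels
```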
S3. Select multiple frames whose target foreground images have been extracted in step S2 and which belong to the low-density crowd level, and annotate each selected frame with its crowd count. Fit the relation between the number of target foreground pixels and the crowd count over the selected frames to obtain a first low-density crowd model, or fit the relation between the number of target foreground edge pixels and the crowd count to obtain a second low-density crowd model. At the same time, select frames whose target foreground images have been extracted in step S2 and which belong to each of the high-density crowd levels as training samples, extract the texture features of each training sample's target foreground image using a gray-level co-occurrence matrix (GLCM), input those texture features to a BP neural network, and train the network to obtain the high-density crowd model;
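The BP (back-propagation) network of step S3 maps a texture feature vector to a high-density level. A minimal one-hidden-layer sketch in plain NumPy follows; the layer sizes, learning rate, and toy training data are all assumptions for illustration, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 5 GLCM features per frame, 3 high-density levels (one-hot labels).
X = rng.normal(size=(30, 5))
levels = np.repeat([0, 1, 2], 10)
Y = np.eye(3)[levels]
X += levels[:, None] * 3.0   # shift class means so the levels are separable

W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 3)); b2 = np.zeros(3)

lr = 0.5
for _ in range(3000):
    H = sigmoid(X @ W1 + b1)          # forward pass, hidden layer
    O = sigmoid(H @ W2 + b2)          # output layer
    dO = (O - Y) * O * (1 - O)        # back-propagate the squared error
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)

pred = np.argmax(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), axis=1)
print((pred == levels).mean())  # training accuracy
```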
S4. For each frame acquired in step S2 whose crowd density is to be detected, input the number of its target foreground pixels to the first low-density crowd model to obtain a crowd count, then judge whether that count exceeds a threshold F; if not, determine the crowd density level from the obtained crowd count; if so, enter step S5.
Alternatively, for each frame acquired in step S2 whose crowd density is to be detected, input the number of edge pixels of its target foreground image to the second low-density crowd model to obtain a crowd count, then judge whether that count exceeds the threshold F; if not, determine the crowd density level from the obtained crowd count; if so, enter step S5.
S5. Extract the texture features of the frame's target foreground image using the GLCM, input the extracted texture features to the high-density crowd model, and obtain the crowd density level from the output of the high-density crowd model.
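Steps S4 and S5 form a two-stage cascade: a cheap pixel-count model is tried first, with fallback to the texture model only above the threshold F. A schematic sketch follows; both models, the value F = 20, and the level bucketing are placeholders, not values fixed by the patent:

```python
F = 20  # threshold above which the low-density model is distrusted (assumed)

def low_density_count(n_foreground_pixels: int) -> float:
    """First low-density model: linear pixel-count fit (placeholder slope)."""
    return 0.002 * n_foreground_pixels

def high_density_level(texture_features) -> str:
    """Stand-in for the trained BP network over GLCM texture features."""
    return "high-density level 2"

def density_level_for_frame(n_foreground_pixels: int, texture_features) -> str:
    count = low_density_count(n_foreground_pixels)    # step S4
    if count <= F:
        # Map the count to a low-density level (bucket size 10 assumed).
        return "low-density level %d" % (1 + int(count) // 10)
    return high_density_level(texture_features)       # step S5

print(density_level_for_frame(5000, None))   # count 10 -> low-density model
print(density_level_for_frame(50000, None))  # count 100 -> texture model
```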
Preferably, the background learning in step S1 proceeds as follows:
S11. Convert the first of the taken-out frames to a grayscale image, and establish an initial codebook for each pixel of this grayscale frame; each pixel of the first frame corresponds to one initial codebook, each initial codebook contains one codeword, and that codeword records the gray value of the corresponding pixel in the first frame. Also set an initial training threshold;
S12. For each frame after the first among the taken-out frames: whenever the next frame is acquired, first convert it to a grayscale image, then perform the following for each pixel of that grayscale frame:
Match the pixel against the current codebook formed at the same position by the previous grayscale frames, i.e. detect whether the pixel's gray value lies within the training-threshold range of some codeword of the current codebook at the same pixel position;
If so, update that codeword's member variables according to the pixel's gray value, where a codeword's member variables include the maximum and minimum gray values of the pixel;
If not, establish a new codeword from the pixel's gray value, record that gray value in the new codeword, and add it to the current codebook, obtaining an updated codebook; at the same time, update the current training threshold;
S13. Detect whether the frame obtained in S12 is the last of the taken-out frames;
If not, continue with step S12 when the next frame is acquired;
If so, background learning is complete, and the background image information is obtained from the codebook corresponding to each pixel according to step S12.
Further, the initial training threshold in step S11 is set to 10.
Further, the current training threshold in step S12 is updated by incrementing it by 1.
Further, the training-threshold range of a codeword in step S12 is: from the gray value recorded by the codeword minus the training threshold to the gray value recorded by the codeword plus the training threshold.
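Steps S11–S13 can be sketched for a single pixel as follows. Representing each codeword by its [min, max] gray values and matching against that range extended by the threshold is an interpretation of the member-variable description; the initial threshold 10 and the +1 update come from the text:

```python
def learn_pixel_background(gray_values, start_threshold=10):
    """Codebook background learning for one pixel over the first N frames.

    gray_values: the pixel's gray value in each training frame.
    Returns the codebook as a list of [min, max] codewords.
    """
    threshold = start_threshold
    codebook = [[gray_values[0], gray_values[0]]]  # S11: one initial codeword
    for g in gray_values[1:]:                      # S12
        for word in codebook:
            lo, hi = word
            if lo - threshold <= g <= hi + threshold:
                word[0] = min(lo, g)               # update member variables
                word[1] = max(hi, g)
                break
        else:
            codebook.append([g, g])                # establish a new codeword
            threshold += 1                         # update training threshold
    return codebook                                # S13: learning complete

# A pixel that is mostly background (~100) with one passing object (200).
book = learn_pixel_background([100, 103, 98, 200, 101, 99])
print(len(book))  # 2 codewords: the background cluster and the outlier
```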
Preferably, the method further comprises step S6: judging whether the crowd density level detection result obtained for each frame through step S4 or S5 is normal. The detailed process is as follows:
Obtain the crowd density level of the frame preceding the current frame and of the frame following it, and compare the current frame's crowd density level with both:
If the current frame's crowd density level differs from both the preceding frame's and the following frame's, judge that the current frame's crowd density level was detected in error, and use the frame, labelled with the crowd density level it actually belongs to, as a future training sample for the first low-density crowd model, the second low-density crowd model, or the high-density crowd model;
If the current frame's crowd density level differs from the preceding frame's but matches the following frame's, conclude that the crowd density changed abruptly between the preceding frame and the current frame, and that the current frame's crowd density level detection is normal;
If the current frame's crowd density level matches the preceding frame's but differs from the following frame's, conclude that the crowd density changed abruptly between the current frame and the following frame, and that the current frame's crowd density level detection is normal.
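The three cases of step S6 reduce to a single comparison against the temporal neighbours; a sketch:

```python
def detection_is_normal(prev_level: int, cur_level: int, next_level: int) -> bool:
    """Step S6: a detection is abnormal only when the current frame's level
    differs from BOTH its neighbours (an isolated outlier); a level shared
    with either neighbour is treated as a genuine abrupt density change."""
    return cur_level == prev_level or cur_level == next_level

assert detection_is_normal(1, 1, 2)      # density changes after current frame
assert detection_is_normal(1, 2, 2)      # density changed before current frame
assert not detection_is_normal(1, 3, 1)  # isolated outlier: detection error
```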
Preferably, the texture features include ASM (angular second moment) energy, contrast, inverse difference moment, entropy, and autocorrelation.
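A minimal sketch of computing a GLCM and the five texture features listed above in NumPy; quantizing to 8 gray levels and using a single horizontal (0, 1) offset are assumptions, since the patent does not fix these parameters:

```python
import numpy as np

def glcm_features(img: np.ndarray, levels: int = 8):
    """Normalized GLCM for offset (0, 1), plus the five texture features."""
    q = (img.astype(np.float64) * levels / (img.max() + 1)).astype(int)
    P = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[i, j] += 1                             # co-occurrence counts
    P /= P.sum()
    I, J = np.indices((levels, levels))
    asm = (P ** 2).sum()                         # ASM / energy
    contrast = (P * (I - J) ** 2).sum()          # contrast
    idm = (P / (1.0 + (I - J) ** 2)).sum()       # inverse difference moment
    entropy = -(P[P > 0] * np.log(P[P > 0])).sum()
    mu_i, mu_j = (P * I).sum(), (P * J).sum()
    sd_i = np.sqrt((P * (I - mu_i) ** 2).sum())
    sd_j = np.sqrt((P * (J - mu_j) ** 2).sum())
    corr = ((P * (I - mu_i) * (J - mu_j)).sum() / (sd_i * sd_j)
            if sd_i > 0 and sd_j > 0 else 1.0)   # autocorrelation
    return asm, contrast, idm, entropy, corr

# A uniform patch has maximal ASM/IDM and zero contrast: a "coarse" texture.
flat = np.full((8, 8), 50, dtype=np.uint8)
asm, contrast, idm, entropy, corr = glcm_features(flat)
print(contrast)  # 0.0
```

A fine, high-density crowd texture spreads mass off the GLCM diagonal, raising contrast and entropy while lowering ASM and IDM, which is exactly the signal the BP network classifies.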
Preferably, the first 30 frames are taken out in step S1, and background learning is performed on these 30 frames to obtain the background image information.
The second object of the present invention is achieved through the following technical solution: a deep-learning-based crowd density detection system for implementing the above crowd density detection method, comprising a camera for acquiring every frame in real time, characterized by further comprising a local image processing device and a control centre; the camera is connected to the local image processing device by a data cable, and the local image processing device is connected to the control centre through a network.
The local image processing device detects the crowd density for each frame and sends each frame's crowd density information to the control centre over the network. The local image processing device comprises:
a background modeling module, for obtaining the first several frames from the camera and performing background learning on them to obtain background image information;
a background difference module, for extracting, for each subsequent frame, the target foreground image in the frame by background subtraction according to the background image information;
an edge detection module, for performing edge detection on the target foreground image in a frame;
a pixel statistics module, for counting the number of target foreground pixels in a frame and for counting the number of target foreground edge pixels in a frame;
a texture feature extraction module, for extracting texture features from the target foreground image in a frame using the gray-level co-occurrence matrix;
a low-density crowd model building module, for fitting the relation between the number of target foreground pixels and the annotated crowd count over the selected frames belonging to the low-density crowd level to obtain the first low-density crowd model, or for fitting the relation between the number of target foreground edge pixels and the annotated crowd count over those frames to obtain the second low-density crowd model;
a high-density crowd model building module, for inputting the texture features of the training sample frames belonging to each high-density crowd level to a BP neural network and training the network to obtain the high-density crowd model;
a low-density crowd density detection module, for inputting, for each frame whose crowd density is to be detected, the number of the frame's target foreground pixels to the first low-density crowd model to obtain the frame's crowd count, passing the frame to the high-density crowd density detection module when the detected crowd count exceeds the threshold F, and obtaining the frame's crowd density level from the crowd count when it does not; or for inputting the number of the frame's target foreground edge pixels to the second low-density crowd model to obtain the frame's crowd count, likewise passing the frame to the high-density crowd density detection module when the count exceeds the threshold F and obtaining the frame's crowd density level from the count otherwise;
a high-density crowd density detection module, for, upon receiving a frame from the low-density crowd density detection module, first obtaining the texture features of the frame's target foreground image through the texture feature extraction module, inputting those texture features to the high-density crowd model, and obtaining the frame's crowd density level from the high-density crowd model.
Preferably, the local image processing device is an ARM development board; the background modeling module, background difference module, edge detection module, pixel statistics module, texture feature extraction module, low-density crowd model building module, high-density crowd model building module, low-density crowd density detection module, and high-density crowd density detection module are built on the software platform of the ARM development board.
Relative to the prior art, the present invention has the following advantages and effects:
(1) The present invention first performs background learning on the first several frames obtained from the camera to get background image information, then extracts the target foreground image of each subsequently acquired frame according to that background information. Next, multiple frames whose target foreground images have been extracted and which belong to the low-density crowd level are selected, their crowd counts are annotated, and a low-density crowd model is built by pixel statistics; at the same time, frames whose target foreground images have been extracted by the above method and which belong to each high-density crowd level are selected as training samples, the texture features of each training sample's target foreground image are input to a BP neural network, and the network is trained to obtain the high-density crowd model. Each frame whose crowd density is to be detected is first applied to the low-density crowd model: when the crowd count it returns does not exceed a threshold, the crowd density level is judged from the count; when the count exceeds the threshold, the frame is input to the high-density crowd model, which judges the crowd density. The invention thus combines the pixel-statistics approach with the texture-feature approach: the pixel-statistics method obtains the crowd density level for low-density crowds, while high-density crowds that the pixel-statistics method cannot judge correctly have their density level detected by the texture-feature method, giving high detection precision at low computational cost. Moreover, the background image in the inventive method is obtained by learning from the first several frames captured by the camera; during modeling, the temporal model of each pixel adapts to motion, so background learning handles temporal jitter and complex dynamic backgrounds well. The background image obtained through background learning therefore yields a more accurate target foreground image, further improving the accuracy of crowd density detection.
(2) The crowd density detection system of the invention mainly consists of the camera, the local image processing device, and the control centre. Each frame captured by the camera is sent directly over a data cable to the local image processing device, which processes each frame to obtain the crowd density and sends the crowd density to the control centre. The invention thus processes images directly on the local image processor; there is no need to send bulky images over the network to a back end for processing, and only a small amount of bandwidth is needed to transmit the detected crowd density result to the control centre. The crowd density detection system of the invention therefore has the advantages of low bandwidth occupation and fast processing.
Brief description of the drawings
Fig. 1 is a flow chart of the crowd density detection method of the present invention.
Embodiment
The present invention is described in further detail below with reference to an embodiment and the accompanying drawing, but the embodiments of the present invention are not limited thereto.
Embodiment
This embodiment discloses a crowd density detection method based on deep learning; as shown in Fig. 1, the steps are as follows:
S1. Acquire every frame in real time through a camera, then take out the first several frames and perform background learning on them to obtain background image information; in this embodiment the first 30 frames are taken out for background learning.
The background learning in this step proceeds as follows:
S11. Convert the first of the taken-out frames to a grayscale image, and establish an initial codebook for each pixel of this grayscale frame; each pixel of the first frame corresponds to one initial codebook, each initial codebook contains one codeword, and that codeword records the gray value of the corresponding pixel in the first frame. Also set an initial training threshold; in this embodiment the initial training threshold is set to 10.
S12. For each frame after the first among the taken-out frames: whenever the next frame is acquired, first convert it to a grayscale image, then perform the following for each pixel of that grayscale frame:
Match the pixel against the current codebook formed at the same position by the previous grayscale frames, i.e. detect whether the pixel's gray value lies within the training-threshold range of some codeword of the current codebook at the same pixel position, where the training-threshold range of a codeword runs from the gray value recorded by the codeword minus the training threshold to the gray value recorded by the codeword plus the training threshold.
If so, update that codeword's member variables according to the pixel's gray value, where a codeword's member variables include the maximum and minimum gray values of the pixel;
If not, establish a new codeword from the pixel's gray value, record that gray value in the new codeword, and add it to the current codebook, obtaining an updated codebook; at the same time update the current training threshold, which in this embodiment is done by incrementing it by 1.
S13. Detect whether the frame obtained in S12 is the last of the taken-out frames;
If not, continue with step S12 when the next frame is acquired;
If so, background learning is complete, and the background image information is obtained from the codebook corresponding to each pixel according to step S12.
S2. For each subsequent frame, extract its target foreground image by background subtraction according to the background image information obtained in step S1;
S3. Select multiple frames whose target foreground images have been extracted in step S2 and which belong to the low-density crowd level, and annotate each selected frame with its crowd count. Fit the relation between the number of target foreground pixels and the crowd count over the selected frames to obtain a first low-density crowd model, or fit the relation between the number of target foreground edge pixels and the crowd count to obtain a second low-density crowd model. At the same time, select frames whose target foreground images have been extracted in step S2 and which belong to each of the high-density crowd levels as training samples, extract the texture features of each training sample's target foreground image using the gray-level co-occurrence matrix, input those texture features to a BP neural network, and train the network to obtain the high-density crowd model;
S4. For each frame acquired in step S2 whose crowd density is to be detected, input the number of its target foreground pixels to the first low-density crowd model to obtain the frame's crowd count, then judge whether that count exceeds a threshold F; if not, obtain the frame's crowd density level from the obtained crowd count; if so, enter step S5.
Alternatively, for each frame acquired in step S2 whose crowd density is to be detected, input the number of edge pixels of its target foreground image to the second low-density crowd model to obtain the frame's crowd count, then judge whether that count exceeds the threshold F; if not, determine the frame's crowd density level from the obtained crowd count; if so, enter step S5.
S5. Extract the texture features of the frame's target foreground image using the gray-level co-occurrence matrix, input the extracted texture features to the high-density crowd model, and obtain the crowd density level from the output of the high-density crowd model.
S6. Judge whether the crowd density level detection result obtained for each frame through step S4 or S5 is normal. The detailed process is as follows:
Obtain the crowd density level of the frame preceding the current frame and of the frame following it, and compare the current frame's crowd density level with both:
If the current frame's crowd density level differs from both the preceding frame's and the following frame's, judge that the current frame's crowd density level was detected in error, and use the frame, labelled with the crowd density level it actually belongs to, as a future training sample for the first low-density crowd model, the second low-density crowd model, or the high-density crowd model;
If the current frame's crowd density level differs from the preceding frame's but matches the following frame's, conclude that the crowd density changed abruptly between the preceding frame and the current frame, and that the current frame's crowd density level detection is normal;
If the current frame's crowd density level matches the preceding frame's but differs from the following frame's, conclude that the crowd density changed abruptly between the current frame and the following frame, and that the current frame's crowd density level detection is normal.
The texture features of a frame's target foreground image mentioned above include ASM energy (angular second moment), contrast, inverse difference moment, entropy, and autocorrelation (correlation).
The present embodiment also discloses a kind of crowd density based on deep learning for being used to realize crowd density detection method
Detecting system, including for obtaining in real time per two field picture camera, local image processing apparatus and control centre, camera leads to
Cross data wire and connect local image processing apparatus, local image processing apparatus passes through network connection control centre;Wherein this map
As processing unit, for detecting crowd density for each two field picture, and the corresponding crowd density information of each two field picture is passed through
Network is sent to control centre;
Local image processing apparatus includes in the present embodiment:
a background modeling module, for acquiring the first several frames from the camera and performing background learning on these frames to obtain background image information;
a background difference module, for extracting, for each subsequent frame, the target foreground image from the frame by background subtraction according to the background image information;
an edge detection module, for performing edge detection on the target foreground image in the frame;
a pixel statistics module, for counting the number of pixels of the target foreground image in the frame and the number of edge pixels of the target foreground image in the frame;
a texture feature extraction module, for extracting texture features of the target foreground image in the frame using a gray-level co-occurrence matrix;
a low-density crowd model building module, for fitting the relation between the pixel count of the target foreground image in each selected frame belonging to the low-density crowd grade and its calibrated crowd count to obtain the first low-density crowd model, or for fitting the relation between the edge-pixel count of the target foreground image in each selected frame belonging to the low-density crowd grade and the calibrated crowd count to obtain the second low-density crowd model;
a dense-crowd model building module, for inputting the texture features corresponding to the training sample images of each grade within the dense-crowd grades into a BP neural network and training the BP neural network to obtain the dense-crowd model;
a low-density crowd density detection module, which, for each frame whose crowd density needs to be detected, inputs the pixel count of the target foreground image of the frame into the first low-density crowd model to obtain the crowd count of the frame; when the detected crowd count exceeds a certain value F, the frame is input into the dense-crowd density detection module, and when it does not exceed F, the crowd density grade of the frame is obtained from the crowd count; alternatively, for each frame whose crowd density needs to be detected, the module inputs the edge-pixel count of the target foreground image of the frame into the second low-density crowd model to obtain the crowd count of the frame, and routes the frame in the same way according to the value F;
a dense-crowd density detection module, which, upon receiving a frame input by the low-density crowd density detection module, first obtains the texture features of the target foreground image of the frame through the texture feature extraction module, inputs the texture features into the dense-crowd model, and obtains the crowd density grade of the frame from the dense-crowd model.
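The first and second low-density models described above are simple fits between a foreground statistic and a calibrated head count. The patent does not state the fitted functional form; the sketch below assumes a least-squares line, and the calibration data are hypothetical:

```python
import numpy as np

def fit_low_density_model(feature_counts, crowd_counts):
    """Fit crowd count ~ a * feature + b by least squares.

    feature_counts: foreground pixel counts (first model) or
    foreground edge-pixel counts (second model), one value per
    calibrated low-density training frame.
    """
    a, b = np.polyfit(feature_counts, crowd_counts, deg=1)
    return lambda n: a * n + b

# hypothetical calibration: pixel count -> labelled head count
model = fit_low_density_model([1000, 2000, 3000], [5, 10, 15])
```

A perspective-corrected or piecewise fit may work better in real scenes, since a person near the camera covers far more pixels than one far away; the linear form is only the simplest instance of the relation-fitting step.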
In this embodiment the local image processing apparatus is a Samsung S5PV210 processor on a Huaqing FS210 development board (other development boards may be selected according to actual needs), with the program ported to the ARM board platform. The background modeling module, background difference module, edge detection module, pixel statistics module, texture feature extraction module, low-density crowd model building module, dense-crowd model building module, low-density crowd density detection module and dense-crowd density detection module in the local image processing apparatus are built on the software platform of the ARM development board.
The S5PV210 adopts an ARM Cortex-A8 core with the ARMv7 instruction set and a main frequency of up to 1 GHz, a 64/32-bit internal bus structure, 32 KB/32 KB data/instruction L1 caches and a 512 KB L2 cache, achieving a high-performance computing capability of 2000 DMIPS (2 billion instructions per second). It contains powerful hardware codecs supporting the encoding and decoding of MPEG-1/2/4, H.263 and H.264 format video, analog/digital TV output, and JPEG hardware compression at resolutions up to 8000 x 8000.
It has a built-in high-performance PowerVR SGX540 3D graphics engine and a 2D graphics engine supporting 2D/3D graphics acceleration. As a fifth-generation PowerVR product, its polygon throughput is 28 million polygons per second and its pixel fill rate is up to 250 million pixels per second, a substantial improvement in 3D and multimedia performance over its predecessors, and it supports PC-class display technologies such as DX9, SM3.0 and OpenGL 2.0.
It possesses an IVA3 hardware accelerator with outstanding graphics decoding performance, supports full-HD multi-standard video coding, can smoothly play and record 1920 x 1080 (1080p) video at 30 frames per second, can decode higher-quality images and video faster, and can output HD video to an external display through the built-in HDMI v1.3 interface.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited by the above embodiment. Any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and shall fall within the protection scope of the present invention.
Claims (10)
1. A crowd density detection method based on deep learning, characterized in that the steps are as follows:
S1, acquiring each frame image in real time by a camera, then taking out the first several frames, performing background learning on these frames, and obtaining background image information;
S2, for each subsequent frame, extracting the target foreground image from the frame by background subtraction according to the background image information obtained in step S1;
S3, selecting multiple frames from which target foreground images have been extracted in step S2 and which belong to the low-density crowd grade, calibrating a crowd count for each selected frame, and fitting the relation between the pixel count of the target foreground image in each selected frame and the crowd count to obtain a first low-density crowd model, or fitting the relation between the edge-pixel count of the target foreground image in each selected frame and the crowd count to obtain a second low-density crowd model; meanwhile, selecting multiple frames from which target foreground images have been extracted in step S2 and which belong to each grade within the dense-crowd grades as training samples, extracting the texture features of the target foreground image of each training sample using a gray-level co-occurrence matrix, inputting the texture features of the target foreground image of each training sample into a BP neural network, and training the BP neural network to obtain a dense-crowd model;
S4, for each frame obtained in step S2 whose crowd density needs to be detected, inputting the pixel count of the target foreground image of the frame into the first low-density crowd model to obtain a crowd count, then judging whether the obtained crowd count exceeds a certain value F; if not, determining the crowd density grade according to the obtained crowd count; if so, entering step S5;
or, for each frame obtained in step S2 whose crowd density needs to be detected, inputting the edge-pixel count of the target foreground image of the frame into the second low-density crowd model to obtain a crowd count, then judging whether the obtained crowd count exceeds the certain value F; if not, determining the crowd density grade according to the obtained crowd count; if so, entering step S5;
S5, extracting the texture features of the target foreground image of the frame using the gray-level co-occurrence matrix, inputting the extracted texture features into the dense-crowd model, and obtaining the crowd density grade from the output of the dense-crowd model.
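The branch in steps S4 and S5 amounts to a two-stage dispatcher: a regression model handles sparse scenes, and frames whose estimated count exceeds F fall through to the texture-based classifier. In the sketch below the models, the default F = 20 and the grade boundaries are illustrative placeholders, not values from the patent:

```python
def detect_density_grade(pixel_count, texture_features,
                         low_model, dense_model, F=20):
    """Two-stage grade decision of steps S4-S5 (illustrative)."""
    count = low_model(pixel_count)      # S4: regress the crowd count
    if count <= F:
        # low-density branch: grade determined directly from count
        return "low" if count <= F / 2 else "medium"
    # S5: dense branch: classify the GLCM texture features
    return dense_model(texture_features)

# placeholder models for illustration only
low_model = lambda n: n / 200.0         # pixel count -> head count
dense_model = lambda feats: "high"      # stands in for the BP network
```

The same dispatcher applies unchanged to the second low-density model by passing the edge-pixel count instead of the pixel count.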
2. The crowd density detection method based on deep learning according to claim 1, characterized in that the background learning process in step S1 is as follows:
S11, first converting the first frame among the taken-out frames into a grayscale image, and establishing an initial codebook for each pixel of this grayscale frame; each pixel of the first frame corresponds to one initial codebook, each initial codebook contains one codeword, and the codeword records the gray value of the corresponding pixel in the first frame; an initial training threshold is also set;
S12, for the frames after the first frame among the taken-out frames, whenever the next frame is acquired, first converting the frame into a grayscale image and performing the following operation for each pixel of the grayscale frame: matching the pixel against the current codebook built up at the same pixel position from the previous grayscale frames, and detecting whether the gray value of the pixel falls within the training-threshold range of some codeword of that current codebook;
if so, updating the member variables of that codeword according to the gray value of the pixel, where the member variables of a codeword include the maximum gray value and the minimum gray value of the pixel;
if not, establishing a new codeword according to the gray value of the pixel, recording in the new codeword the gray value of the pixel, and adding it into the current codebook to obtain the updated codebook, while updating the current training threshold;
S13, detecting whether the frame acquired in S12 is the last frame among the taken-out frames:
if not, continuing to execute step S12 when the next frame is acquired;
if so, background learning is completed, and the background image information is obtained from the codebook corresponding to each pixel obtained through step S12.
3. The crowd density detection method based on deep learning according to claim 2, characterized in that the initial training threshold in step S11 is set to 10.
4. The crowd density detection method based on deep learning according to claim 2, characterized in that the current training threshold in step S12 is updated by incrementing it by 1.
5. The crowd density detection method based on deep learning according to claim 2, characterized in that the training-threshold range of a codeword in step S12 extends from the gray value recorded by the codeword minus the training threshold to the gray value recorded by the codeword plus the training threshold.
6. The crowd density detection method based on deep learning according to claim 1, characterized by further including step S6 of judging whether the crowd density grade detection result obtained for each frame through step S4 or S5 is normal; the detailed process is as follows:
obtaining the crowd density grades of the previous frame and the next frame of the current frame, and comparing the crowd density grade of the current frame with those of its previous frame and its next frame:
if the crowd density grade of the current frame differs from both that of its previous frame and that of its next frame, judging that the detection of the current frame's density grade is erroneous, and using the frame, according to the density grade it actually belongs to, as a subsequent training sample for the first low-density crowd model, the second low-density crowd model or the dense-crowd model;
if the crowd density grade of the current frame differs from that of its previous frame but is identical to that of its next frame, deeming that a sudden change in crowd density occurred between the previous frame and the current frame, and that the detection of the current frame's density grade is normal;
if the crowd density grade of the current frame is identical to that of its previous frame but differs from that of its next frame, deeming that a sudden change in crowd density occurred between the current frame and the next frame, and that the detection of the current frame's density grade is normal.
7. The crowd density detection method based on deep learning according to claim 1, characterized in that the texture features include angular second moment (ASM energy), contrast, inverse difference moment, entropy and correlation.
8. The crowd density detection method based on deep learning according to claim 1, characterized in that the first 30 frames are taken out in step S1 and background learning is performed on these 30 frames to obtain the background image information.
9. A crowd density detection system based on deep learning for implementing the crowd density detection method of claim 1, including a camera for acquiring each frame image in real time, characterized by further including a local image processing apparatus and a control center, wherein the camera is connected to the local image processing apparatus by a data cable, and the local image processing apparatus is connected to the control center through a network;
the local image processing apparatus is configured to detect the crowd density for each frame and to send the corresponding crowd density information of each frame to the control center through the network; the local image processing apparatus includes:
a background modeling module, for acquiring the first several frames from the camera and performing background learning on these frames to obtain background image information;
a background difference module, for extracting, for each subsequent frame, the target foreground image from the frame by background subtraction according to the background image information;
an edge detection module, for performing edge detection on the target foreground image in the frame;
a pixel statistics module, for counting the number of pixels of the target foreground image in the frame and the number of edge pixels of the target foreground image in the frame;
a texture feature extraction module, for extracting texture features of the target foreground image in the frame using a gray-level co-occurrence matrix;
a low-density crowd model building module, for fitting the relation between the pixel count of the target foreground image in each selected frame belonging to the low-density crowd grade and its calibrated crowd count to obtain the first low-density crowd model, or for fitting the relation between the edge-pixel count of the target foreground image in each selected frame belonging to the low-density crowd grade and the calibrated crowd count to obtain the second low-density crowd model;
a dense-crowd model building module, for inputting the texture features corresponding to the training sample images of each grade within the dense-crowd grades into a BP neural network and training the BP neural network to obtain the dense-crowd model;
a low-density crowd density detection module, for inputting, for each frame whose crowd density needs to be detected, the pixel count of the target foreground image of the frame into the first low-density crowd model to obtain the crowd count of the frame, the frame being input into the dense-crowd density detection module when the detected crowd count exceeds a certain value F, and the crowd density grade of the frame being obtained from the crowd count when the detected crowd count does not exceed the certain value F; or for inputting, for each frame whose crowd density needs to be detected, the edge-pixel count of the target foreground image of the frame into the second low-density crowd model to obtain the crowd count of the frame, the frame being input into the dense-crowd density detection module when the detected crowd count exceeds the certain value F, and the crowd density grade of the frame being obtained from the crowd count when the detected crowd count does not exceed the certain value F;
a dense-crowd density detection module, for first obtaining, upon receiving a frame input by the low-density crowd density detection module, the texture features of the target foreground image of the frame through the texture feature extraction module, inputting the texture features into the dense-crowd model, and obtaining the crowd density grade of the frame from the dense-crowd model.
10. The crowd density detection system based on deep learning according to claim 9, characterized in that the local image processing apparatus is an ARM development board; and the background modeling module, background difference module, edge detection module, pixel statistics module, texture feature extraction module, low-density crowd model building module, dense-crowd model building module, low-density crowd density detection module and dense-crowd density detection module in the local image processing apparatus are built on the software platform of the ARM development board.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710177154.1A CN107145821A (en) | 2017-03-23 | 2017-03-23 | A kind of crowd density detection method and system based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107145821A true CN107145821A (en) | 2017-09-08 |
Family
ID=59783709
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710177154.1A Pending CN107145821A (en) | 2017-03-23 | 2017-03-23 | A kind of crowd density detection method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107145821A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108009477A (en) * | 2017-11-10 | 2018-05-08 | 东软集团股份有限公司 | Stream of people's quantity detection method, device, storage medium and the electronic equipment of image |
CN108021949A (en) * | 2017-12-27 | 2018-05-11 | 重庆交通开投科技发展有限公司 | Crowded degree detection method, device, system and electronic equipment |
CN108072385A (en) * | 2017-12-06 | 2018-05-25 | 爱易成技术(天津)有限公司 | Space coordinates localization method, device and the electronic equipment of mobile target |
CN108768585A (en) * | 2018-04-27 | 2018-11-06 | 南京邮电大学 | Uplink based on deep learning exempts from signaling NOMA system multi-user detection methods |
CN108810814A (en) * | 2018-02-25 | 2018-11-13 | 王昆 | A kind of orientation big data transmitted bandwidth distribution method |
CN108830145A (en) * | 2018-05-04 | 2018-11-16 | 深圳技术大学(筹) | A kind of demographic method and storage medium based on deep neural network |
CN108985256A (en) * | 2018-08-01 | 2018-12-11 | 曜科智能科技(上海)有限公司 | Based on the multiple neural network demographic method of scene Density Distribution, system, medium, terminal |
CN109508583A (en) * | 2017-09-15 | 2019-03-22 | 杭州海康威视数字技术股份有限公司 | A kind of acquisition methods and device of distribution trend |
CN110084112A (en) * | 2019-03-20 | 2019-08-02 | 太原理工大学 | A kind of traffic congestion judgment method based on image procossing |
WO2020125057A1 (en) * | 2018-12-20 | 2020-06-25 | 北京海益同展信息科技有限公司 | Livestock quantity identification method and apparatus |
CN111383340A (en) * | 2018-12-28 | 2020-07-07 | 成都皓图智能科技有限责任公司 | Background filtering method, device and system based on 3D image |
CN112364788A (en) * | 2020-11-13 | 2021-02-12 | 润联软件系统(深圳)有限公司 | Monitoring video crowd quantity monitoring method based on deep learning and related components thereof |
CN113538401A (en) * | 2021-07-29 | 2021-10-22 | 燕山大学 | Crowd counting method and system combining cross-modal information in complex scene |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982341A (en) * | 2012-11-01 | 2013-03-20 | 南京师范大学 | Self-intended crowd density estimation method for camera capable of straddling |
US20170068860A1 (en) * | 2015-09-09 | 2017-03-09 | Alex Adekola | System for measuring crowd density |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982341A (en) * | 2012-11-01 | 2013-03-20 | 南京师范大学 | Self-intended crowd density estimation method for camera capable of straddling |
US20170068860A1 (en) * | 2015-09-09 | 2017-03-09 | Alex Adekola | System for measuring crowd density |
Non-Patent Citations (2)
Title |
---|
Wu Guodong: "Research on Crowd Density Estimation Methods in Intelligent Surveillance", China Masters' Theses Full-text Database, Information Science and Technology Series * |
Chai Bin: "Intelligent Video Surveillance of Sudden Crowd Gathering Events", China Masters' Theses Full-text Database, Information Science and Technology Series * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109508583A (en) * | 2017-09-15 | 2019-03-22 | 杭州海康威视数字技术股份有限公司 | A kind of acquisition methods and device of distribution trend |
CN109508583B (en) * | 2017-09-15 | 2020-11-06 | 杭州海康威视数字技术股份有限公司 | Method and device for acquiring crowd distribution characteristics |
CN108009477B (en) * | 2017-11-10 | 2020-08-21 | 东软集团股份有限公司 | Image people flow number detection method and device, storage medium and electronic equipment |
CN108009477A (en) * | 2017-11-10 | 2018-05-08 | 东软集团股份有限公司 | Stream of people's quantity detection method, device, storage medium and the electronic equipment of image |
CN108072385A (en) * | 2017-12-06 | 2018-05-25 | 爱易成技术(天津)有限公司 | Space coordinates localization method, device and the electronic equipment of mobile target |
CN108021949A (en) * | 2017-12-27 | 2018-05-11 | 重庆交通开投科技发展有限公司 | Crowded degree detection method, device, system and electronic equipment |
CN108810814A (en) * | 2018-02-25 | 2018-11-13 | 王昆 | A kind of orientation big data transmitted bandwidth distribution method |
CN108810814B (en) * | 2018-02-25 | 2019-04-12 | 南京飞畅软件技术有限公司 | A kind of orientation big data transmitted bandwidth distribution method |
CN108768585A (en) * | 2018-04-27 | 2018-11-06 | 南京邮电大学 | Uplink based on deep learning exempts from signaling NOMA system multi-user detection methods |
CN108768585B (en) * | 2018-04-27 | 2021-03-16 | 南京邮电大学 | Multi-user detection method of uplink signaling-free non-orthogonal multiple access (NOMA) system based on deep learning |
CN108830145A (en) * | 2018-05-04 | 2018-11-16 | 深圳技术大学(筹) | A kind of demographic method and storage medium based on deep neural network |
CN108985256A (en) * | 2018-08-01 | 2018-12-11 | 曜科智能科技(上海)有限公司 | Based on the multiple neural network demographic method of scene Density Distribution, system, medium, terminal |
WO2020125057A1 (en) * | 2018-12-20 | 2020-06-25 | 北京海益同展信息科技有限公司 | Livestock quantity identification method and apparatus |
CN111383340A (en) * | 2018-12-28 | 2020-07-07 | 成都皓图智能科技有限责任公司 | Background filtering method, device and system based on 3D image |
CN111383340B (en) * | 2018-12-28 | 2023-10-17 | 成都皓图智能科技有限责任公司 | Background filtering method, device and system based on 3D image |
CN110084112A (en) * | 2019-03-20 | 2019-08-02 | 太原理工大学 | A kind of traffic congestion judgment method based on image procossing |
CN110084112B (en) * | 2019-03-20 | 2022-09-20 | 太原理工大学 | Traffic jam judging method based on image processing |
CN112364788A (en) * | 2020-11-13 | 2021-02-12 | 润联软件系统(深圳)有限公司 | Monitoring video crowd quantity monitoring method based on deep learning and related components thereof |
CN112364788B (en) * | 2020-11-13 | 2021-08-03 | 润联软件系统(深圳)有限公司 | Monitoring video crowd quantity monitoring method based on deep learning and related components thereof |
CN113538401A (en) * | 2021-07-29 | 2021-10-22 | 燕山大学 | Crowd counting method and system combining cross-modal information in complex scene |
CN113538401B (en) * | 2021-07-29 | 2022-04-05 | 燕山大学 | Crowd counting method and system combining cross-modal information in complex scene |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107145821A (en) | A kind of crowd density detection method and system based on deep learning | |
EP3882808B1 (en) | Face detection model training method and apparatus, and face key point detection method and apparatus | |
Miao et al. | Neuromorphic vision datasets for pedestrian detection, action recognition, and fall detection | |
CN110390262B (en) | Video analysis method, device, server and storage medium | |
US20210279513A1 (en) | Target detection method and apparatus, model training method and apparatus, device, and storage medium | |
US10735694B2 (en) | System and method for activity monitoring using video data | |
EP3937073A1 (en) | Method for video classification, method and device for model training, and storage medium | |
CN105051754B (en) | Method and apparatus for detecting people by monitoring system | |
WO2019210555A1 (en) | People counting method and device based on deep neural network and storage medium | |
CN106162177A (en) | Method for video coding and device | |
TW202026948A (en) | Methods and devices for biological testing and storage medium thereof | |
TW202121233A (en) | Image processing method, processor, electronic device, and storage medium | |
CN111553247B (en) | Video structuring system, method and medium based on improved backbone network | |
US10438405B2 (en) | Detection of planar surfaces for use in scene modeling of a captured scene | |
CN111950457A (en) | Oil field safety production image identification method and system | |
CN114373162A (en) | Dangerous area personnel intrusion detection method and system for transformer substation video monitoring | |
CN117953578A (en) | Elevator passenger behavior detection method based on depth vision technology | |
CN115988182B (en) | Digital twinning-oriented remote video monitoring method | |
CN117676121A (en) | Video quality assessment method, device, equipment and computer storage medium | |
CN111739098B (en) | Speed measuring method and device, electronic equipment and storage medium | |
CN109040673A (en) | Method of video image processing, device and the device with store function | |
CN113705309A (en) | Scene type judgment method and device, electronic equipment and storage medium | |
CN111062337B (en) | People stream direction detection method and device, storage medium and electronic equipment | |
WO2023029289A1 (en) | Model evaluation method and apparatus, storage medium, and electronic device | |
CN110991365B (en) | Video motion information acquisition method, system and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170908 |