CN108280427A - Big data processing method based on pedestrian flow - Google Patents

Big data processing method based on pedestrian flow

Info

Publication number
CN108280427A
CN108280427A
Authority
CN
China
Prior art keywords
people
connecting rod
shaft joint
image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810067482.0A
Other languages
Chinese (zh)
Other versions
CN108280427B (en)
Inventor
肖会
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou All Things Collection Industrial Internet Technology Co ltd
Original Assignee
CHENGDU DINGZHIHUI SCIENCE AND TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU DINGZHIHUI SCIENCE AND TECHNOLOGY Co Ltd
Priority to CN201810067482.0A
Publication of CN108280427A
Application granted
Publication of CN108280427B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/34Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A big data processing method based on pedestrian flow, comprising: setting the image acquisition height, angle and direction of the image acquisition module; acquiring video of the target area; determining the field of view and the ambient light intensity in a frame of the video of the target area; obtaining the video image of the target area and dividing it into effective detection regions; performing edge detection and person detection; if a person is detected, tracking whether the person's moving direction matches an existing tracking setting; if it matches, judging whether the person crosses a virtual setting line; if the virtual setting line is crossed, updating the statistics module and incrementing its count; repeating the steps until the pedestrian-flow statistics for the region are complete; if the counted pedestrian flow exceeds the alarm threshold, issuing an alarm signal; if the counted pedestrian flow exceeds the reminder threshold, issuing a reminder signal. The method can reduce the computing-resource usage of pedestrian-flow big data processing, lower hardware cost and structural complexity, reduce power consumption, allow the monitoring range to be adjusted flexibly, and improve the accuracy of the monitoring counts.

Description

Big data processing method based on pedestrian flow
Technical field
The present invention relates to the field of big data, and more specifically to a big data processing method based on pedestrian flow.
Background technology
With the rapid development and progress of society, urban populations keep increasing and pedestrian flow in public places grows year by year, especially at particular times and places such as New Year's Eve, holidays, morning and evening rush hours, and crowd-gathering scenes such as subways, squares, schools, shopping malls running promotions, railway stations and ticket halls. When a sudden trigger occurs, however, such as a large promotional event, a New Year's Eve concert, the start of Spring Festival travel, or a disaster such as a fire or an earthquake, pedestrian flow surges, and if it is not measured and controlled, crowd incidents may result. For example, during the New Year's Eve celebration in Shanghai on December 31, 2014, large numbers of visitors gathered on the Bund in Huangpu District to see in the new year; at the stairway of the sightseeing platform over the Huangpu River at the southeast corner of Chen Yi Square on the Bund, someone at the bottom lost balance and fell, triggering a cascade of falling, pile-up, crowding and trampling that left 36 people dead and 49 injured. This was a public-safety liability incident in which inadequate preparation for a mass activity, ineffective on-site management and improper handling of the emergency led to a crush causing heavy casualties and serious consequences. Moreover, in confined spaces such as shopping malls, railway stations and subway stations, once a stampede or disaster occurs, evacuating and rescuing people is extremely difficult if the pedestrian flow is unknown. For these reasons it is necessary to grasp, in real time, the pedestrian flow at specific times and specific places. The prior art offers many ways to acquire pedestrian flow: pedestrian-flow monitoring systems handle the case of a single entrance or exit but cannot solve the pressing problems faced by places with heavy pedestrian flow; head-detection methods form trajectories of head targets, but they occupy substantial computing resources, are too costly, structurally complex and power-hungry, and are therefore ill-suited to practical deployment.
In addition, in the public places mentioned above, making the monitoring comprehensive requires more cameras; if two or more image acquisition modules are installed for two or more scenes, hardware resources are wasted, electric power is used inefficiently, control modules multiply, and the staff needed for supervision or safety inspection increases accordingly.
On this basis, it is necessary to devise a big data processing method based on pedestrian flow that can solve the above problems.
Summary of the invention
An object of the present invention is to provide a big data processing method based on pedestrian flow that can reduce the computing-resource usage of pedestrian-flow big data processing, lower hardware cost and structural complexity, reduce power consumption, allow the monitoring range to be adjusted flexibly, and improve the accuracy of the monitoring counts.
The technical solution adopted by the present invention to solve the above technical problem is a big data processing method based on pedestrian flow, including: step 1, setting the image acquisition height, angle and direction of the image acquisition module; step 2, acquiring video of the target area; step 3, determining the field of view and the ambient light intensity in a frame of the video of the target area, proceeding to step 4 if both satisfy preset conditions, and otherwise reacquiring a specification setting for whichever parameter fails the preset condition, sending it to the control module, and returning to step 1 to perform the operation of step 1; step 4, obtaining the video image of the target area and dividing it into effective detection regions; step 5, performing edge detection and person detection; step 6, if a person is determined to be detected, further tracking whether the person's moving direction matches an existing tracking setting, and otherwise returning to step 5; step 7, if it matches, judging whether the person crosses a virtual setting line, and otherwise returning to step 6; step 8, if the virtual setting line is crossed, updating the statistics module and incrementing its count, and otherwise returning to step 6; step 9, repeating steps 6-8 until the pedestrian-flow statistics for the region are complete; step 10, issuing an alarm signal if the pedestrian flow counted in step 9 exceeds the alarm threshold, and issuing a reminder signal if the pedestrian flow counted in step 9 exceeds the reminder threshold.
In one embodiment, the image acquisition module includes: a first fixing member 2, a first shaft joint 3, a first connecting rod 4, a second shaft joint 5, a second connecting rod 6, a third connecting rod 10, a third shaft joint 7, a fourth connecting rod 8, a fourth shaft joint 9, a fifth shaft joint 12 and a camera 11. One end of the first fixing member 2 is fixed to the wall 1; the other end of the first fixing member 2 is connected to the first connecting rod 4 through the first shaft joint 3 so that the two can move relative to each other. The two ends of the first connecting rod 4 are connected to one end of the second connecting rod 6 and one end of the third connecting rod 10, respectively, and can each move relative to the second connecting rod 6 and the third connecting rod 10 simultaneously. The left end and the middle of the fourth connecting rod 8 are connected to the other end of the second connecting rod 6 and the other end of the third connecting rod 10, respectively, and can each move relative to the second connecting rod 6 and the third connecting rod 10 simultaneously. The right end of the fourth connecting rod 8 is fixedly provided with the fifth shaft joint 12; the fifth shaft joint 12 is connected to the camera 11, and the camera 11 can move relative to the fifth shaft joint 12. The first shaft joint 3, the second shaft joint 5, the third shaft joint 7 and the fourth shaft joint 9 are each internally provided with a rotatable motor for adjusting the angle between the two or three components they connect; the fifth shaft joint 12 is internally provided with a rotatable motor for rotating the camera 11 to move its shooting angle and field of view left and right. Joint adjustment of these motors provides a reasonable shooting angle. In the structure of Fig. 2, the rotatable motor inside the first shaft joint 3 adjusts the angle between the first fixing member 2 and the first connecting rod 4, so that, seen from a viewpoint perpendicular to the wall plane looking from left to right, the shooting field of view can be moved up and down; the joint operation of the rotatable motors inside the first shaft joint 3, second shaft joint 5, third shaft joint 7 and fourth shaft joint 9 changes the shape of the quadrilateral formed by the first connecting rod 4, second connecting rod 6, third connecting rod 10 and fourth connecting rod 8, so that, seen from the viewing angle currently shown in Fig. 2, the shooting field of view can be moved left and right; combined with the rotatable motor inside the fifth shaft joint 12 mentioned above, the camera 11 can be adjusted and rotated so that, seen from the vertical viewpoint of the large dashed circle on the right, the shooting angle and field of view can be moved left and right.
In one embodiment, determining, in step 3, the field of view and the ambient light intensity in a frame of the video of the target area further includes: step 31, determining the effective area within the field of view of a frame of the video of the target area, the effective area being the part of the frame's field of view that excludes lighting fixtures, display devices, the ceiling and other regions where people cannot appear; step 32, if the proportion of the field of view occupied by the effective area is below a preset threshold, the field-of-view setting for the frame of the target-area video does not satisfy the condition, so a specification setting for the parameter failing the preset condition is reacquired and sent to the control module, the field-of-view specification being set as follows: a corresponding image acquisition height, angle and direction setting is looked up in a mapping table between image acquisition height, angle and direction and the shooting field of view, and the image acquisition height, angle and direction are then adjusted by moving the shaft joints in one or more of the up, down, left, right, forward and backward directions; step 33, determining the ambient light intensity in the frame of the target-area video from the lighting devices and/or the light transmitted through transparent material or glass; step 34, if the ambient light intensity is below a predetermined threshold, reacquiring a specification setting for the parameter failing the preset condition and sending it to the control module, the ambient-light specification being set to perform target tracking with infrared images, and otherwise continuing to perform target tracking with visible-light images; step 35, if all of the above parameters satisfy the preset conditions, proceeding to step 4, and otherwise returning to step 1 to perform the operation of step 1.
In one embodiment, in step 4, the video image of the target area is obtained and divided into effective detection regions, where the current frame of the video image is divided into an M × N grid, each grid cell being one effective detection region, M and N being positive integers greater than or equal to 2. By gridding the image and setting multiple virtual setting lines for the crossing judgment, the computation can be optimized, reducing processing-resource usage and power consumption.
In one embodiment, in step 5, performing edge detection and person detection includes: step 51, performing a smoothing operation with a filter to obtain the convolution of the current image with the filter, and computing the gradient magnitude of adjacent pixels by differencing; when the gradient exceeds a set threshold, the color change near that point is large and the point is regarded as an edge; step 52, separating the target from the background using these edges, deleting patches whose area is below a certain threshold, and smoothing the result; step 53, computing, for each pixel Pix(x, y) of the current image, the probability P[Pix(x, y)] of that pixel, where x, y are the coordinates of the pixel Pix(x, y), n is the mean over all pixels and δ² is the covariance; if the probability P[Pix(x, y)] of pixel Pix(x, y) is below a threshold, the pixel is regarded as a pixel of the first type, otherwise as a pixel of the second type; when a pixel belongs to the second type, the corresponding background pixel needs to be updated; step 54, updating the background image: when the pixel is regarded as a first-type pixel, the background pixel Bg_{t+1}(x, y) = Pix(x, y); when the pixel is regarded as a second-type pixel, the background pixel Bg_{t+1}(x, y) = α × Pix(x, y) + (1 − α) × Bg_t(x, y), where t is the time, t+1 denotes the next unit of time, and α is the update coefficient; step 55, intersecting the set of first-type pixels with the region of interest, omitting the image border, filtering out noise within regions whose area exceeds a threshold, and extracting, within each region whose area exceeds the threshold, the sub-regions whose pixel-grayscale standard deviation exceeds a threshold to obtain the second target region; step 56, computing the quantized gradient map of the second target region and, using the person model generated during training, outputting a gradient-magnitude probability map for each quantized gradient map; step 57, extracting features in the second target region, scanning the second target region with a rectangular window of fixed size, classifying the objects in the image according to the trained model and the probability map, and computing the probability that an object is a person as P = W(mo) × P(ma), where W(mo) is the weight corresponding to the class of the object obtained from the trained model and P(ma) is the sum of the probability map over the elements contained in the object; if the probability that an object is a person exceeds a preset threshold, it is preliminarily judged that a person is present, and otherwise that no person is present.
In one embodiment, in steps 6-8, if a person has been preliminarily detected, whether the person's moving direction matches an existing tracking setting is further tracked; if it matches, whether the person crosses a virtual setting line is judged, and if the virtual setting line is crossed, the statistics module is updated and its count incremented. This further includes: in the database of the big data processing section, the line crossed by more people is selected as the main virtual setting line, and the virtual setting line in the perpendicular direction is the secondary virtual setting line; for the main virtual setting line and an object in the image preliminarily determined to be a person, the first straight-line equation of the main virtual setting line in the image is constructed, together with the second straight-line equation of the line along the displacement, between the current image frame and the next image frame, of the object preliminarily determined to be a person; when the system formed by these two equations has a solution, the statistics module is updated and its count is incremented; if the system formed by these two equations has no solution, then for the secondary virtual setting line and the object in the image preliminarily determined to be a person, the third straight-line equation of the secondary virtual setting line in the image is constructed, and when the system formed by the second and third equations has a solution, the statistics module is updated and its count is incremented; otherwise it is finally determined that no person is present.
In one embodiment, the first fixing member 2, the first connecting rod 4, the second connecting rod 6, the third connecting rod 10 and the fourth connecting rod 8 are internally provided with circuitry for connecting and controlling the rotatable motors arranged inside the first shaft joint 3, the second shaft joint 5, the third shaft joint 7, the fourth shaft joint 9 and the fifth shaft joint 12, and the circuitry inside the first fixing member 2 is connected on its left side to the control module through in-wall wiring or surface wiring on the wall. The second connecting rod 6 and the third connecting rod 10 are equal in length, and the portions of the first connecting rod 4 and the fourth connecting rod 8 lying between the third shaft joint 7 and the fourth shaft joint 9 are equal in length. The second connecting rod 6 and the third connecting rod 10 are one or more stages of telescoping structure, each stage comprising a sleeve and an inner rod, the diameter of the sleeve being larger than that of the inner rod, the sleeve fitting over the inner rod, and the edge of the portion of the inner rod received in the sleeve being provided with a catch so that the stage has a fixed maximum extension when extended; in the case of a multi-stage telescoping structure, the inner rod of a preceding stage serves as the sleeve of the following stage. The second connecting rod 6 and the third connecting rod 10 include magnetostrictive members; one end of the magnetostrictive member of the second connecting rod 6 is connected to the first shaft joint 3 and the other end to the third shaft joint 7, and one end of the magnetostrictive member of the third connecting rod 10 is connected to the second shaft joint 5 and the other end to the fourth shaft joint 9, the magnetostrictive members being extended or contracted under the control of a control command so as to effectively lengthen or shorten the second connecting rod 6 and the third connecting rod 10, thereby moving the image acquisition module forward or backward over a large range and further enhancing flexibility.
Description of the drawings
The embodiments of the present invention are shown in the accompanying drawings by way of example and not by way of limitation, in which like reference numerals denote like elements, and in which:
Fig. 1 illustrates a flow diagram of a big data processing method based on pedestrian flow according to an exemplary embodiment of the present invention.
Fig. 2 illustrates a structural diagram of the image acquisition module according to an exemplary embodiment of the present invention.
Fig. 3 illustrates a general flowchart of step 3 according to an exemplary embodiment of the present invention.
Fig. 4 illustrates a general flowchart of step 5 according to an exemplary embodiment of the present invention.
Detailed description of the embodiments
Before undertaking the detailed description below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms "include" and "comprise", as well as derivatives thereof, mean inclusion without limitation; the term "or" is inclusive, meaning and/or; the phrases "associated with" and "associated therewith", as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term "controller" means any device, system or part thereof that controls at least one operation, where such a device may be implemented in hardware, firmware or software, or in some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art should understand that in many if not most instances such definitions apply to prior as well as future uses of the words and phrases so defined.
In the following description, reference is made to the accompanying drawings, which show several specific embodiments by way of illustration. It is to be understood that other embodiments are contemplated and may be made without departing from the scope or spirit of the present disclosure. The following detailed description is therefore not to be taken in a limiting sense.
Fig. 1 illustrates a flow diagram of a big data processing method based on pedestrian flow according to an exemplary embodiment of the present invention. Specifically, the method includes:
Step 1, setting the image acquisition height, angle and direction of the image acquisition module;
Step 2, acquiring video of the target area;
Step 3, determining the field of view and the ambient light intensity in a frame of the video of the target area; if both satisfy preset conditions, proceeding to step 4; otherwise reacquiring a specification setting for whichever parameter fails the preset condition, sending it to the control module, and returning to step 1 to perform the operation of step 1;
Step 4, obtaining the video image of the target area and dividing it into effective detection regions;
Step 5, performing edge detection and person detection;
Step 6, if a person is determined to be detected, further tracking whether the person's moving direction matches an existing tracking setting; otherwise returning to step 5;
Step 7, if it matches, judging whether the person crosses a virtual setting line; otherwise returning to step 6;
Step 8, if the virtual setting line is crossed, updating the statistics module and incrementing its count; otherwise returning to step 6;
Step 9, repeating steps 6-8 until the pedestrian-flow statistics for the region are complete;
Step 10, issuing an alarm signal if the pedestrian flow counted in step 9 exceeds the alarm threshold, and issuing a reminder signal if the pedestrian flow counted in step 9 exceeds the reminder threshold.
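Steps 1-10 amount to a single acquire-check-detect-count loop. The sketch below is a rough orientation only, wiring those steps together in Python; every injected helper (frame acquisition, the field-of-view/light check, detection, the line-crossing test) and both threshold values are placeholders assumed for illustration, not names or values from the patent.

```python
from typing import Callable, Iterable

def run_flow_statistics(
    acquire_frame: Callable[[], object],            # step 2: grab one video frame
    view_and_light_ok: Callable[[object], bool],    # step 3: field-of-view / ambient-light check
    detect_people: Callable[[object], Iterable],    # steps 4-5: gridding + edge/person detection
    crossed_virtual_line: Callable[[object], bool], # steps 6-8: track match + line-crossing test
    n_frames: int,                                  # step 9: how long to keep counting
    alarm_threshold: int = 500,                     # step 10 thresholds (illustrative values)
    reminder_threshold: int = 300,
) -> int:
    count = 0
    for _ in range(n_frames):
        frame = acquire_frame()
        if not view_and_light_ok(frame):
            continue                                # the patent re-runs step 1 with a new specification here
        for person in detect_people(frame):
            if crossed_virtual_line(person):
                count += 1                          # update the statistics module
    if count > alarm_threshold:                     # step 10: alarm, then reminder
        print("alarm signal:", count)
    elif count > reminder_threshold:
        print("reminder signal:", count)
    return count
```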
Fig. 2 illustrates a structural diagram of the image acquisition module according to an exemplary embodiment of the present invention. The left part of Fig. 2 is a side view of the image acquisition module, and the large dashed circle at the far right is an enlarged view of the small dashed circle on the left, observed from a viewing plane on the right that is perpendicular to that side view. The image acquisition module includes: a first fixing member 2, a first shaft joint 3, a first connecting rod 4, a second shaft joint 5, a second connecting rod 6, a third connecting rod 10, a third shaft joint 7, a fourth connecting rod 8, a fourth shaft joint 9, a fifth shaft joint 12 and a camera 11. One end of the first fixing member 2 is fixed to the wall 1; the other end of the first fixing member 2 is connected to the first connecting rod 4 through the first shaft joint 3 so that the two can move relative to each other. The two ends of the first connecting rod 4 are connected to one end of the second connecting rod 6 and one end of the third connecting rod 10, respectively, and can each move relative to the second connecting rod 6 and the third connecting rod 10 simultaneously. The left end and the middle of the fourth connecting rod 8 are connected to the other end of the second connecting rod 6 and the other end of the third connecting rod 10, respectively, and can each move relative to the second connecting rod 6 and the third connecting rod 10 simultaneously. The right end of the fourth connecting rod 8 is fixedly provided with the fifth shaft joint 12; the fifth shaft joint 12 is connected to the camera 11, and the camera 11 can move relative to the fifth shaft joint 12. The first shaft joint 3, the second shaft joint 5, the third shaft joint 7 and the fourth shaft joint 9 are each internally provided with a rotatable motor for adjusting the angle between the two or three components they connect; the fifth shaft joint 12 is internally provided with a rotatable motor for rotating the camera 11 to move its shooting angle and field of view left and right. Joint adjustment of these motors provides a reasonable shooting angle. In the structure of Fig. 2, the rotatable motor inside the first shaft joint 3 adjusts the angle between the first fixing member 2 and the first connecting rod 4, so that, seen from a viewpoint perpendicular to the wall plane looking from left to right, the shooting field of view can be moved up and down; the joint operation of the rotatable motors inside the first shaft joint 3, second shaft joint 5, third shaft joint 7 and fourth shaft joint 9 changes the shape of the quadrilateral formed by the first connecting rod 4, second connecting rod 6, third connecting rod 10 and fourth connecting rod 8, so that, seen from the viewing angle currently shown in Fig. 2, the shooting field of view can be moved left and right; combined with the rotatable motor inside the fifth shaft joint 12, the camera 11 can be adjusted and rotated so that, seen from the vertical viewpoint of the large dashed circle on the right, the shooting angle and field of view can be moved left and right. Through the combined or individual operation of the above series of shaft joints and connecting rods, the image acquisition module can be moved up, down, left, right, forward and backward, which effectively enhances the flexibility of image acquisition and provides flexible source data for the subsequent big data processing based on pedestrian flow.
The present invention allows the camera to be adjusted in real time. For example, in a railway-station ticket-hall scene, when the number of open ticket windows is being considered, the camera of the image acquisition module needs to be aimed at the queues in front of the windows that have been opened; when the carrying capacity of the ticket hall is being considered, the camera of the image acquisition module needs to be aimed at the entrance and at the queueing and security-check area outside the entrance, in order to determine whether flow should be limited outside the hall, whether the outdoor queueing railings should be made more involved, and/or whether the distance to the entrance should be increased, so as to limit the flow effectively and avoid crowd incidents.
Preferably, the first fixing member 2, the first connecting rod 4, the second connecting rod 6, the third connecting rod 10 and the fourth connecting rod 8 are internally provided with circuitry for connecting and controlling the rotatable motors arranged inside the first shaft joint 3, the second shaft joint 5, the third shaft joint 7, the fourth shaft joint 9 and the fifth shaft joint 12, and the circuitry inside the first fixing member 2 is connected on its left side to the control module through in-wall wiring or surface wiring on the wall.
Preferably, one or more of the first shaft joint 3, the second shaft joint 5, the third shaft joint 7, the fourth shaft joint 9 and the fifth shaft joint 12 are internally provided with a radio transceiver module for receiving control commands issued by the control module and adjusting, according to those commands, the rotation of the two or three components connected by that shaft joint.
Preferably, the second connecting rod 6 and the third connecting rod 10 are equal in length, and the portions of the first connecting rod 4 and the fourth connecting rod 8 lying between the third shaft joint 7 and the fourth shaft joint 9 are equal in length; that is, the second connecting rod 6, the third connecting rod 10, the first connecting rod 4 and the fourth connecting rod 8 form a parallelogram. This arrangement makes the forward and backward movement regular and also makes it easy for the control module to compute and set the rotation angle and direction of the motors, as sketched below.
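Because the four rods form a parallelogram, the fourth connecting rod 8 (and the camera on its right end) translates without rotating relative to the first connecting rod 4 as the side links swing, so the camera displacement follows directly from the side-link length and joint angle. The following is a minimal sketch of that geometry under the stated parallelogram assumption; the link length and angles are made-up values for illustration.

```python
import math

def camera_offset(link_length_m: float, joint_angle_rad: float) -> tuple[float, float]:
    """Displacement of the camera end of rod 8 relative to rod 4.

    In a parallelogram four-bar linkage the coupler (rod 8) translates on a circle
    whose radius equals the side-link length, so the camera is shifted by the same
    vector as the moving ends of rods 6 and 10.
    """
    dx = link_length_m * math.cos(joint_angle_rad)
    dy = link_length_m * math.sin(joint_angle_rad)
    return dx, dy

# Example: a 0.4 m side link swung from 30 to 60 degrees moves the camera by:
x0, y0 = camera_offset(0.4, math.radians(30))
x1, y1 = camera_offset(0.4, math.radians(60))
print(f"camera moved by ({x1 - x0:+.3f} m, {y1 - y0:+.3f} m)")
```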
Preferably, the second connecting rod 6 and the third connecting rod 10 are one or more stages of telescoping structure (not shown); that is, each stage comprises a sleeve and an inner rod, the diameter of the sleeve is larger than that of the inner rod, the sleeve fits over the inner rod, and the edge of the portion of the inner rod received in the sleeve is provided with a catch so that the stage has a fixed maximum extension when extended; in the case of a multi-stage telescoping structure, the inner rod of a preceding stage serves as the sleeve of the following stage.
Preferably, the second connecting rod 6 and the third connecting rod 10 include magnetostrictive members (not shown); one end of the magnetostrictive member of the second connecting rod 6 is connected to the first shaft joint 3 and the other end to the third shaft joint 7, and one end of the magnetostrictive member of the third connecting rod 10 is connected to the second shaft joint 5 and the other end to the fourth shaft joint 9, so that the magnetostrictive members are extended or contracted under the control of a control command to effectively lengthen or shorten the second connecting rod 6 and the third connecting rod 10, thereby moving the image acquisition module forward or backward over a large range and further enhancing flexibility.
Fig. 3 illustrates a general flowchart of step 3 according to an exemplary embodiment of the present invention. Specifically, determining, in step 3, the field of view and the ambient light intensity in a frame of the video of the target area further includes:
Step 31, determining the effective area within the field of view of a frame of the video of the target area, the effective area being the part of the frame's field of view that excludes lighting fixtures, display devices, the ceiling and other regions where people cannot appear;
Step 32, if the proportion of the field of view occupied by the effective area is below a preset threshold, the field-of-view setting for the frame of the target-area video does not satisfy the condition; a specification setting for the parameter failing the preset condition is then reacquired and sent to the control module, the field-of-view specification being set as follows: a corresponding image acquisition height, angle and direction setting is looked up in a mapping table between image acquisition height, angle and direction and the shooting field of view, and the image acquisition height, angle and direction are then adjusted by moving the shaft joints in one or more of the up, down, left, right, forward and backward directions;
Step 33, determining the ambient light intensity in the frame of the target-area video from the lighting devices and/or the light transmitted through transparent material or glass;
Step 34, if the ambient light intensity is below a predetermined threshold, reacquiring a specification setting for the parameter failing the preset condition and sending it to the control module, the ambient-light specification being set to perform target tracking with infrared images; otherwise continuing to perform target tracking with visible-light images (see the sketch after step 35);
Step 35, if all of the above parameters satisfy the preset conditions, proceeding to step 4; otherwise returning to step 1 to perform the operation of step 1.
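A compact numeric version of the checks in steps 32 and 34 might look like the sketch below; the boolean-mask convention, the 8-bit grayscale assumption and both threshold values are illustrative choices, not values from the patent.

```python
import numpy as np

def check_frame(effective_mask: np.ndarray, gray_frame: np.ndarray,
                area_ratio_threshold: float = 0.5,
                light_threshold: float = 40.0) -> dict:
    """effective_mask: True where people can plausibly appear (no lamps, screens, ceiling).
    gray_frame: 8-bit grayscale frame used to estimate ambient light intensity."""
    area_ratio = effective_mask.mean()                    # fraction of the view that is usable (step 32)
    ambient = float(gray_frame[effective_mask].mean())    # mean brightness inside the effective area (step 33)
    return {
        "view_ok": area_ratio >= area_ratio_threshold,    # else: look up a new height/angle/direction setting
        "use_infrared": ambient < light_threshold,        # step 34: fall back to infrared tracking
        "area_ratio": area_ratio,
        "ambient": ambient,
    }

# Example: a frame where 60% of the view is usable and the scene is dim.
mask = np.zeros((480, 640), dtype=bool); mask[:, :384] = True
frame = np.full((480, 640), 30, dtype=np.uint8)
print(check_frame(mask, frame))   # view_ok=True, use_infrared=True
```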
Preferably, in step 4, the video image of the target area is obtained and divided into effective detection regions, where the current frame of the video image is divided into an M × N grid, each grid cell being one effective detection region, M and N being positive integers greater than or equal to 2. The prior art uses simple horizontal or purely vertical division, in which the placement of the subsequent virtual setting line and the line-crossing judgment rely on a single criterion and ignore the diversity, dispersion and small-amplitude movements of people, so that small movements in the perpendicular direction are not observed effectively, and tracking and detecting movement in directions not parallel to the virtual setting line costs extra computation, leading to excessive processing-resource usage and power consumption. The present invention improves on this prior art by gridding the image and setting multiple virtual setting lines for the crossing judgment, which optimizes the computation and reduces processing-resource usage and power consumption; a sketch of the gridding follows.
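As a sketch of the M × N gridding itself (the array shapes and the example grid size are illustrative assumptions):

```python
import numpy as np

def grid_cells(frame: np.ndarray, m: int, n: int):
    """Split a frame into an M x N grid of effective detection regions (views, not copies)."""
    h, w = frame.shape[:2]
    ys = np.linspace(0, h, m + 1, dtype=int)
    xs = np.linspace(0, w, n + 1, dtype=int)
    for i in range(m):
        for j in range(n):
            yield (i, j), frame[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]

frame = np.zeros((480, 640, 3), dtype=np.uint8)
for (i, j), cell in grid_cells(frame, m=3, n=4):
    pass  # each cell gets its own virtual setting lines and crossing judgment
print("last cell:", (i, j), cell.shape)   # (2, 3) (160, 160, 3)
```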
Fig. 4 illustrates a general flowchart of step 5 according to an exemplary embodiment of the present invention. Specifically, performing edge detection and person detection in step 5 includes:
Step 51, performing a smoothing operation with a filter to obtain the convolution of the current image with the filter, and computing the gradient magnitude of adjacent pixels by differencing; when the gradient exceeds a set threshold, the color change near that point is large and the point is regarded as an edge;
Step 52, separating the target from the background using these edges, deleting patches whose area is below a certain threshold, and smoothing the result;
Step 53, computing, for each pixel Pix(x, y) of the current image, the probability P[Pix(x, y)] of that pixel, where x, y are the coordinates of the pixel Pix(x, y), n is the mean over all pixels and δ² is the covariance; if the probability P[Pix(x, y)] of pixel Pix(x, y) is below a threshold, the pixel is regarded as a pixel of the first type, otherwise as a pixel of the second type; when a pixel belongs to the second type, the corresponding background pixel needs to be updated (a sketch of this pixel model follows step 57);
Step 54, updating the background image: when the pixel is regarded as a first-type pixel, the background pixel Bg_{t+1}(x, y) = Pix(x, y); when the pixel is regarded as a second-type pixel, the background pixel Bg_{t+1}(x, y) = α × Pix(x, y) + (1 − α) × Bg_t(x, y), where t is the time, t+1 denotes the next unit of time, and α is the update coefficient;
Step 55, intersecting the set of first-type pixels with the region of interest, omitting the image border, filtering out noise within regions whose area exceeds a threshold, and extracting, within each region whose area exceeds the threshold, the sub-regions whose pixel-grayscale standard deviation exceeds a threshold to obtain the second target region;
Step 56, computing the quantized gradient map of the second target region and, using the person model generated during training, outputting a gradient-magnitude probability map for each quantized gradient map;
Step 57, extracting features in the second target region, scanning the second target region with a rectangular window of fixed size, classifying the objects in the image according to the trained model and the probability map, and computing the probability that an object is a person as P = W(mo) × P(ma), where W(mo) is the weight corresponding to the class of the object obtained from the trained model and P(ma) is the sum of the probability map over the elements contained in the object; if the probability that an object is a person exceeds a preset threshold, it is preliminarily judged that a person is present; otherwise it is preliminarily determined that no person is present.
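Step 53 names a per-pixel mean n and covariance δ², but the probability formula itself appears in the original only as an image, so the sketch below assumes a single-Gaussian background model; the update rule, however, follows step 54 as written (first-type pixels replace the background, second-type pixels are blended with coefficient α). The array layout, the probability threshold and α are illustrative.

```python
import numpy as np

class GaussianBackground:
    """Assumed single-Gaussian background model for steps 53-54 (per-pixel mean and variance)."""

    def __init__(self, first_frame: np.ndarray, alpha: float = 0.05, var0: float = 225.0):
        self.mean = first_frame.astype(np.float64)            # n in step 53
        self.var = np.full_like(self.mean, var0)              # delta^2 in step 53
        self.alpha = alpha                                    # update coefficient alpha in step 54

    def classify_and_update(self, frame: np.ndarray, p_threshold: float = 0.01) -> np.ndarray:
        pix = frame.astype(np.float64)
        # Gaussian likelihood of each pixel under the current background model (assumed form).
        prob = np.exp(-0.5 * (pix - self.mean) ** 2 / self.var) / np.sqrt(2 * np.pi * self.var)
        first_type = prob < p_threshold                       # "first type" pixels, i.e. target candidates (step 53)
        # Step 54: first-type pixels replace the background, second-type pixels are blended in.
        self.mean = np.where(first_type, pix,
                             self.alpha * pix + (1 - self.alpha) * self.mean)
        return first_type

bg = GaussianBackground(np.full((4, 4), 100.0))
mask = bg.classify_and_update(np.array([[100.0] * 4] * 3 + [[200.0] * 4]))
print(mask.astype(int))   # last row flagged as first-type (candidate person pixels)
```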
Preferably, in steps 6-8, if a person has been preliminarily detected, whether the person's moving direction matches an existing tracking setting is further tracked; if it matches, whether the person crosses a virtual setting line is judged, and if the virtual setting line is crossed, the statistics module is updated and its count incremented. This further includes: in the database of the big data processing section, the line crossed by more people is selected as the main virtual setting line, and the virtual setting line in the perpendicular direction is the secondary virtual setting line; for the main virtual setting line and an object in the image preliminarily determined to be a person, the first straight-line equation of the main virtual setting line in the image is constructed, together with the second straight-line equation of the line along the displacement, between the current image frame and the next image frame, of the object preliminarily determined to be a person; when the system formed by these two equations has a solution, the statistics module is updated and its count is incremented; if the system formed by these two equations has no solution, then for the secondary virtual setting line and the object in the image preliminarily determined to be a person, the third straight-line equation of the secondary virtual setting line in the image is constructed, and when the system formed by the second and third equations has a solution, the statistics module is updated and its count is incremented; otherwise it is finally determined that no person is present. A sketch of the crossing test follows.
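The crossing test in steps 6-8 amounts to checking whether the segment between a person's positions in two consecutive frames intersects the main virtual setting line and, failing that, the secondary perpendicular line. A self-contained sketch of that test follows; the coordinates and line placements are illustrative assumptions.

```python
def segment_crosses_line(p0, p1, a, b) -> bool:
    """True if the displacement segment p0->p1 crosses the virtual setting line segment a->b.

    Solving the two straight-line equations (steps 6-8) yields a usable crossing exactly
    when the two segments straddle each other, which the orientation test below checks.
    """
    def orient(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (orient(a, b, p0) * orient(a, b, p1) < 0 and
            orient(p0, p1, a) * orient(p0, p1, b) < 0)

def count_crossing(p0, p1, main_line, secondary_line, counter: int) -> int:
    """Increment the statistics counter if the person crossed either virtual setting line."""
    if segment_crosses_line(p0, p1, *main_line) or segment_crosses_line(p0, p1, *secondary_line):
        return counter + 1
    return counter

main = ((0, 100), (640, 100))       # busiest direction: a horizontal counting line
secondary = ((320, 0), (320, 480))  # perpendicular secondary line
print(count_crossing((300, 90), (305, 112), main, secondary, counter=0))   # -> 1
```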
The technical terms above are conventional terms with their ordinary meaning in this field; in order not to obscure the emphasis of the present invention, they are not further explained here.
In summary, in the technical solution of the present invention, by using a big data processing method based on pedestrian flow, the computing-resource usage of pedestrian-flow big data processing can be reduced, hardware cost and structural complexity lowered, power consumption reduced, the monitoring range adjusted flexibly, and the accuracy of the monitoring counts improved.
It will be appreciated that the examples and embodiments of the present invention may be implemented in hardware, in software, or in a combination of hardware and software. As noted above, any entity executing the method may store programs in volatile or non-volatile storage, such as a storage device like a ROM (whether erasable and rewritable or not), in memory such as a RAM, memory chips, devices or integrated circuits, or on an optically or magnetically readable medium such as a CD, DVD, magnetic disk or magnetic tape. It will be appreciated that such storage devices and storage media are examples of machine-readable storage suitable for storing one or more programs which, when executed, implement examples of the present invention. Examples of the present invention may also be transmitted electronically via any medium, such as a communication signal carried over a wired or wireless coupling, and examples suitably encompass the same.
It should be noted that, because the present invention solves the technical problem of reducing the computing-resource usage of pedestrian-flow big data processing, lowering hardware cost and structural complexity, reducing power consumption, flexibly adjusting the monitoring range and improving the accuracy of the monitoring counts, and because it uses technical means that a person skilled in the field of computer technology can understand from the teaching herein after reading this description and thereby obtains beneficial technical effects, the claimed solution is a technical solution within the meaning of the patent law for the following claims. Furthermore, because the technical solution claimed in the appended claims can be manufactured or used in industry, the solution has industrial applicability.
The above are only preferred specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Unless expressly stated otherwise, each feature disclosed is only one example of a generic series of equivalent or similar features. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A big data processing method based on pedestrian flow, including:
Step 1, setting the image acquisition height, angle and direction of the image acquisition module;
Step 2, acquiring video of the target area;
Step 3, determining the field of view and the ambient light intensity in a frame of the video of the target area; if both satisfy preset conditions, proceeding to step 4; otherwise reacquiring a specification setting for whichever parameter fails the preset condition, sending it to the control module, and returning to step 1 to perform the operation of step 1;
Step 4, obtaining the video image of the target area and dividing it into effective detection regions;
Step 5, performing edge detection and person detection;
Step 6, if a person is determined to be detected, further tracking whether the person's moving direction matches an existing tracking setting; otherwise returning to step 5;
Step 7, if it matches, judging whether the person crosses a virtual setting line; otherwise returning to step 6;
Step 8, if the virtual setting line is crossed, updating the statistics module and incrementing its count; otherwise returning to step 6;
Step 9, repeating steps 6-8 until the pedestrian-flow statistics for the region are complete;
Step 10, issuing an alarm signal if the pedestrian flow counted in step 9 exceeds the alarm threshold, and issuing a reminder signal if the pedestrian flow counted in step 9 exceeds the reminder threshold.
2. The big data processing method based on pedestrian flow according to claim 1, wherein:
the image acquisition module includes: a first fixing member 2, a first shaft joint 3, a first connecting rod 4, a second shaft joint 5, a second connecting rod 6, a third connecting rod 10, a third shaft joint 7, a fourth connecting rod 8, a fourth shaft joint 9, a fifth shaft joint 12 and a camera 11, wherein one end of the first fixing member 2 is fixed to the wall 1; the other end of the first fixing member 2 is connected to the first connecting rod 4 through the first shaft joint 3 so that the two can move relative to each other; the two ends of the first connecting rod 4 are connected to one end of the second connecting rod 6 and one end of the third connecting rod 10, respectively, and can each move relative to the second connecting rod 6 and the third connecting rod 10 simultaneously; the left end and the middle of the fourth connecting rod 8 are connected to the other end of the second connecting rod 6 and the other end of the third connecting rod 10, respectively, and can each move relative to the second connecting rod 6 and the third connecting rod 10 simultaneously; the right end of the fourth connecting rod 8 is fixedly provided with the fifth shaft joint 12; the fifth shaft joint 12 is connected to the camera 11 and the camera 11 can move relative to the fifth shaft joint 12; the first shaft joint 3, the second shaft joint 5, the third shaft joint 7 and the fourth shaft joint 9 are each internally provided with a rotatable motor for adjusting the angle between the two or three components they connect; the fifth shaft joint 12 is internally provided with a rotatable motor for rotating the camera 11 to move its shooting angle and field of view left and right; joint adjustment of these motors provides a reasonable shooting angle; in the structure of Fig. 2, the rotatable motor inside the first shaft joint 3 adjusts the angle between the first fixing member 2 and the first connecting rod 4, so that, seen from a viewpoint perpendicular to the wall plane looking from left to right, the shooting field of view can be moved up and down; the joint operation of the rotatable motors inside the first shaft joint 3, second shaft joint 5, third shaft joint 7 and fourth shaft joint 9 changes the shape of the quadrilateral formed by the first connecting rod 4, second connecting rod 6, third connecting rod 10 and fourth connecting rod 8, so that, seen from the viewing angle currently shown in Fig. 2, the shooting field of view can be moved left and right; combined with the rotatable motor inside the fifth shaft joint 12, the camera 11 can be adjusted and rotated so that, seen from the vertical viewpoint of the large dashed circle on the right, the shooting angle and field of view can be moved left and right.
3. The big data processing method based on pedestrian flow according to claim 2, wherein:
in step 3, determining the field of view and the ambient light intensity in a frame of the video of the target area further includes:
Step 31, determining the effective area within the field of view of a frame of the video of the target area, the effective area being the part of the frame's field of view that excludes lighting fixtures, display devices, the ceiling and other regions where people cannot appear;
Step 32, if the proportion of the field of view occupied by the effective area is below a preset threshold, the field-of-view setting for the frame of the target-area video does not satisfy the condition; a specification setting for the parameter failing the preset condition is then reacquired and sent to the control module, the field-of-view specification being set as follows: a corresponding image acquisition height, angle and direction setting is looked up in a mapping table between image acquisition height, angle and direction and the shooting field of view, and the image acquisition height, angle and direction are then adjusted by moving the shaft joints in one or more of the up, down, left, right, forward and backward directions;
Step 33, determining the ambient light intensity in the frame of the target-area video from the lighting devices and/or the light transmitted through transparent material or glass;
Step 34, if the ambient light intensity is below a predetermined threshold, reacquiring a specification setting for the parameter failing the preset condition and sending it to the control module, the ambient-light specification being set to perform target tracking with infrared images; otherwise continuing to perform target tracking with visible-light images;
Step 35, if all of the above parameters satisfy the preset conditions, proceeding to step 4; otherwise returning to step 1 to perform the operation of step 1.
4. The big data processing method based on pedestrian flow according to claim 3, wherein:
in step 4, the video image of the target area is obtained and divided into effective detection regions, where the current frame of the video image is divided into an M × N grid, each grid cell being one effective detection region, M and N being positive integers greater than or equal to 2; by gridding the image and setting multiple virtual setting lines for the crossing judgment, the computation can be optimized and processing-resource usage and power consumption reduced.
5. The big data processing method based on pedestrian flow according to claim 4, wherein:
in step 5, performing edge detection and person detection includes:
Step 51, performing a smoothing operation with a filter to obtain the convolution of the current image with the filter, and computing the gradient magnitude of adjacent pixels by differencing; when the gradient exceeds a set threshold, the color change near that point is large and the point is regarded as an edge;
Step 52, separating the target from the background using these edges, deleting patches whose area is below a certain threshold, and smoothing the result;
Step 53, computing, for each pixel Pix(x, y) of the current image, the probability P[Pix(x, y)] of that pixel, where x, y are the coordinates of the pixel Pix(x, y), n is the mean over all pixels and δ² is the covariance; if the probability P[Pix(x, y)] of pixel Pix(x, y) is below a threshold, the pixel is regarded as a pixel of the first type, otherwise as a pixel of the second type; when a pixel belongs to the second type, the corresponding background pixel needs to be updated;
Step 54, updating the background image: when the pixel is regarded as a first-type pixel, the background pixel Bg_{t+1}(x, y) = Pix(x, y); when the pixel is regarded as a second-type pixel, the background pixel Bg_{t+1}(x, y) = α × Pix(x, y) + (1 − α) × Bg_t(x, y), where t is the time, t+1 denotes the next unit of time, and α is the update coefficient;
Step 55, intersecting the set of first-type pixels with the region of interest, omitting the image border, filtering out noise within regions whose area exceeds a threshold, and extracting, within each region whose area exceeds the threshold, the sub-regions whose pixel-grayscale standard deviation exceeds a threshold to obtain the second target region;
Step 56, computing the quantized gradient map of the second target region and, using the person model generated during training, outputting a gradient-magnitude probability map for each quantized gradient map;
Step 57, extracting features in the second target region, scanning the second target region with a rectangular window of fixed size, classifying the objects in the image according to the trained model and the probability map, and computing the probability that an object is a person as P = W(mo) × P(ma), where W(mo) is the weight corresponding to the class of the object obtained from the trained model and P(ma) is the sum of the probability map over the elements contained in the object; if the probability that an object is a person exceeds a preset threshold, it is preliminarily judged that a person is present; otherwise it is preliminarily determined that no person is present.
6. The big data processing method based on pedestrian flow according to claim 4 or 5, wherein:
in steps 6-8, if a person has been preliminarily detected, whether the person's moving direction matches an existing tracking setting is further tracked; if it matches, whether the person crosses a virtual setting line is judged, and if the virtual setting line is crossed, the statistics module is updated and its count incremented, further including: in the database of the big data processing section, the line crossed by more people is selected as the main virtual setting line, and the virtual setting line in the perpendicular direction is the secondary virtual setting line; for the main virtual setting line and an object in the image preliminarily determined to be a person, the first straight-line equation of the main virtual setting line in the image is constructed, together with the second straight-line equation of the line along the displacement, between the current image frame and the next image frame, of the object preliminarily determined to be a person; when the system formed by these two equations has a solution, the statistics module is updated and its count is incremented; if the system formed by these two equations has no solution, then for the secondary virtual setting line and the object in the image preliminarily determined to be a person, the third straight-line equation of the secondary virtual setting line in the image is constructed, and when the system formed by the second and third equations has a solution, the statistics module is updated and its count is incremented; otherwise it is finally determined that no person is present.
7. The big data processing method based on pedestrian flow according to claim 6, wherein:
The first fixing piece 2, the first connecting rod 4, the second connecting rod 6, the third connecting rod 10 and the fourth connecting rod 8 are internally provided with wiring for connecting and controlling the rotatable motors arranged inside the first axis connection 3, the second axis connection 5, the third axis connection 7, the fourth axis connection 9 and the fifth axis connection 12; the wiring inside the first fixing piece 2 passes through wiring inside the left wall or on the wall surface and is connected to the control module.
8. The big data processing method based on pedestrian flow according to claim 7, wherein:
The second connecting rod 6 and the third connecting rod 10 are equal in length, and the part between the third axis connection 7 on the first connecting rod 4 and the fourth axis connection 9 on the fourth connecting rod 8 is of equal length.
9. The big data processing method based on pedestrian flow according to claim 8, wherein:
The second connecting rod 6 and the third connecting rod 10 are telescopic structures of one or more nested levels; each level of the nested structure includes a sleeve and an inner rod, the diameter of the sleeve is larger than the diameter of the inner rod, the sleeve is fitted over the inner rod, and the end portion of the inner rod contained in the sleeve is provided with a hook, so that this level of the nested structure has a fixed maximum extension length when extended; in the case of a multi-level nested structure, the inner rod of the preceding level serves as the sleeve of the following level.
10. The big data processing method based on pedestrian flow according to claim 8, wherein:
The second connecting rod 6 and the third connecting rod 10 include magnetostrictive members; one end of the magnetostrictive member of the second connecting rod 6 is connected to the first axis connection 3 and the other end is connected to the third axis connection 7; one end of the magnetostrictive member of the third connecting rod 10 is connected to the second axis connection 5 and the other end is connected to the fourth axis connection 9; the magnetostrictive members of the second connecting rod 6 and the third connecting rod 10 are extended or contracted under the control of a control command, so as to effectively lengthen or shorten the second connecting rod 6 and the third connecting rod 10, thereby moving the image capture module forward or backward over a large range and further enhancing flexibility.
CN201810067482.0A 2018-01-24 2018-01-24 Big data processing method based on pedestrian flow Active CN108280427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810067482.0A CN108280427B (en) 2018-01-24 2018-01-24 Big data processing method based on pedestrian flow

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810067482.0A CN108280427B (en) 2018-01-24 2018-01-24 Big data processing method based on pedestrian flow

Publications (2)

Publication Number Publication Date
CN108280427A true CN108280427A (en) 2018-07-13
CN108280427B CN108280427B (en) 2021-11-09

Family

ID=62804920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810067482.0A Active CN108280427B (en) 2018-01-24 2018-01-24 Big data processing method based on pedestrian flow

Country Status (1)

Country Link
CN (1) CN108280427B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1792572A (en) * 2005-11-11 2006-06-28 北京航空航天大学 Three-freedom dynamic sensing interexchanging apparatus
CN101320427A (en) * 2008-07-01 2008-12-10 北京中星微电子有限公司 Video monitoring method and system with auxiliary objective monitoring function
CN101877058A (en) * 2010-02-10 2010-11-03 杭州海康威视软件有限公司 People flow rate statistical method and system
BR102012007224A2 (en) * 2012-03-30 2017-04-25 Veltec Soluções Tecnologicas Sa urban and road collective vehicle control and management system
CN105303191A (en) * 2014-07-25 2016-02-03 中兴通讯股份有限公司 Method and apparatus for counting pedestrians in foresight monitoring scene
US20170017846A1 (en) * 2015-07-15 2017-01-19 Umm Al-Qura University Crowd and traffic monitoring apparatus and method
CN106446788A (en) * 2016-08-31 2017-02-22 山东恒宇电子有限公司 Method for passenger flow statistic by means of high-dynamic range image based on optic nerve mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LEE, GWANG-GOOK et al.: "A Statistical Method for Counting Pedestrians in Crowded Environments", IEICE Transactions on Information and Systems *
LI Xinjiang: "Research on Automatic People Counting for Video Surveillance", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110865415A (en) * 2018-08-28 2020-03-06 浙江大华技术股份有限公司 Security inspection method and device
CN110865415B (en) * 2018-08-28 2024-03-22 浙江大华技术股份有限公司 Security check method and device
CN110005974A (en) * 2018-10-29 2019-07-12 永康市道可道科技有限公司 Controllable type surface mounted light based on body density analysis
CN110005974B (en) * 2018-10-29 2021-01-26 中画高新技术产业发展(重庆)有限公司 Controllable surface-mounted lamp based on human body density analysis
CN109815936A (en) * 2019-02-21 2019-05-28 深圳市商汤科技有限公司 A kind of target object analysis method and device, computer equipment and storage medium
CN109815936B (en) * 2019-02-21 2023-08-22 深圳市商汤科技有限公司 Target object analysis method and device, computer equipment and storage medium
CN113709006A (en) * 2021-10-29 2021-11-26 上海闪马智能科技有限公司 Flow determination method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN108280427B (en) 2021-11-09

Similar Documents

Publication Publication Date Title
WO2021088300A1 (en) Rgb-d multi-mode fusion personnel detection method based on asymmetric double-stream network
CN104463117B (en) A kind of recognition of face sample collection method and system based on video mode
CN108280427A (en) A kind of big data processing method based on flow of the people
CN104166861B (en) A kind of pedestrian detection method
CN104123544B (en) Anomaly detection method and system based on video analysis
CN102201146B (en) Active infrared video based fire smoke detection method in zero-illumination environment
CN103605971B (en) Method and device for capturing face images
CN107301378A (en) The pedestrian detection method and system of Multi-classifers integrated in image
CN107220603A (en) Vehicle checking method and device based on deep learning
CN102819764A (en) Method for counting pedestrian flow from multiple views under complex scene of traffic junction
CN110188807A (en) Tunnel pedestrian target detection method based on cascade super-resolution network and improvement Faster R-CNN
CN103310444B (en) A kind of method of the monitoring people counting based on overhead camera head
CN106778645A (en) A kind of image processing method and device
CN103927520A (en) Method for detecting human face under backlighting environment
CN103268470B (en) Object video real-time statistical method based on any scene
CN104036323A (en) Vehicle detection method based on convolutional neural network
CN103853724B (en) multimedia data classification method and device
CN105303191A (en) Method and apparatus for counting pedestrians in foresight monitoring scene
CN102903124A (en) Moving object detection method
CN103632427B (en) A kind of gate cracking protection method and gate control system
CN106210634A (en) A kind of wisdom gold eyeball identification personnel fall down to the ground alarm method and device
Xu et al. Real-time pedestrian detection based on edge factor and Histogram of Oriented Gradient
CN108345842A (en) A kind of processing method based on big data
CN112287827A (en) Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole
CN109147351A (en) A kind of traffic light control system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211008

Address after: 510535 No. 2 Ruitai Road, Huangpu District, Guangzhou, Guangdong

Applicant after: Guangzhou gaimengda Industrial Products Co.,Ltd.

Address before: 610000 Sichuan Province Chengdu High-tech Zone Tianfu Avenue Middle Section 1388 Building 7 Floor 772

Applicant before: CHENGDU DINGZHIHUI TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 510000 Ruitai Road, Huangpu District, Guangzhou, Guangdong

Patentee after: Guangzhou All Things Collection Industrial Internet Technology Co.,Ltd.

Address before: 510535 No. 2 Ruitai Road, Huangpu District, Guangzhou, Guangdong

Patentee before: Guangzhou gaimengda Industrial Products Co.,Ltd.

CP03 Change of name, title or address