CN106204619B - A kind of target object distribution density determines method and device - Google Patents

A kind of target object distribution density determines method and device Download PDF

Info

Publication number
CN106204619B
CN106204619B CN201610580912.XA CN201610580912A CN106204619B CN 106204619 B CN106204619 B CN 106204619B CN 201610580912 A CN201610580912 A CN 201610580912A CN 106204619 B CN106204619 B CN 106204619B
Authority
CN
China
Prior art keywords
target object
pixel
sample image
image
characteristic information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610580912.XA
Other languages
Chinese (zh)
Other versions
CN106204619A (en)
Inventor
金伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201610580912.XA priority Critical patent/CN106204619B/en
Publication of CN106204619A publication Critical patent/CN106204619A/en
Application granted granted Critical
Publication of CN106204619B publication Critical patent/CN106204619B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30242 - Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and device for determining the distribution density of target objects. The method comprises: selecting, from multiple groups of non-calibrated images captured of a monitored scene, a sample image corresponding to each shooting angle; distinguishing, in each sample image, each target object of a given class from the corresponding background, and determining respectively the location information of each target object of that class and of the corresponding background as well as the characteristic information of each pixel contained in each sample image; performing ridge regression fitting on the characteristic information of each pixel in each sample image and on the location information of each target object of that class and of the corresponding background, to determine the mapping relationship of each sample image for that class of target object; and determining the distribution density of that class of target object in each non-calibrated image, according to the determined mapping relationships and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image. Since the method only needs to perform ridge regression fitting on the sample images, it improves calculation speed.

Description

Target object distribution density determination method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and device for determining target object distribution density.
Background art
General object-counting approaches simply analyse the texture of an image to distinguish foreground from background: a camera is fixed at some vantage point with its lens aimed at, for example, a road; a series of image-processing operations is applied to the captured images; convolution with a template is computed; and matching is then performed.
However, existing target object density estimation methods involve a large amount of computation and are therefore time-consuming, so they cannot meet the requirements of real-time monitoring.
Summary of the invention
Embodiments of the present invention provide a method and device for determining target object distribution density, to solve the prior-art problem that the calculation process of existing methods for determining the distribution density of target objects takes so long that the distribution density of the target objects cannot be monitored in real time.
An embodiment of the invention provides a method for determining target object distribution density, comprising:
selecting, from multiple groups of non-calibrated images captured of the monitored scene, a sample image corresponding to each shooting angle;
distinguishing, in each sample image, each target object of a given class from the corresponding background, and determining respectively the location information of each target object of that class and of the corresponding background and the characteristic information of each pixel contained in each sample image;
performing ridge regression fitting on the characteristic information of each pixel in each sample image and on the location information of each target object of that class and of the corresponding background, to determine the mapping relationship of each sample image for that class of target object;
determining the distribution density of that class of target object in each non-calibrated image, according to the determined mapping relationship of each sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image.
In one possible implementation of the method provided in an embodiment of the present invention, determining the distribution density of that class of target object in each non-calibrated image, according to the determined mapping relationship of each sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image, specifically includes:
determining the score value of each pixel in the non-calibrated image according to the characteristic information of each pixel in the non-calibrated image and the mapping relationship of the sample image for that class of target object;
determining whether each pixel in the non-calibrated image belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the non-calibrated image;
determining the distribution density of that class of target object in the non-calibrated image according to the contours formed by the pixels belonging to that class of target object.
In one possible implementation of the method provided in an embodiment of the present invention, determining the distribution density of that class of target object in each non-calibrated image, according to the determined mapping relationship of each sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image, specifically includes:
dividing the non-calibrated image into multiple regions;
determining the score value of each pixel in each region according to the characteristic information of each pixel in the region and the mapping relationship of the sample image for that class of target object;
determining whether each pixel in each region belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the region;
determining the distribution density of that class of target object in each region according to the contours formed by the pixels belonging to that class of target object;
determining the distribution density of that class of target object in the non-calibrated image according to the distribution density of that class of target object in each region.
In one possible implementation of the method provided in an embodiment of the present invention, when determining respectively the location information of each target object of that class and of the corresponding background, the method further includes:
performing Gaussian filtering on the location information of each target object of that class and of the corresponding background.
In one possible implementation of the method provided in an embodiment of the present invention, when determining respectively the characteristic information of each pixel contained in each sample image, the method further includes:
performing clustering on the characteristic information of each pixel.
In one possible implementation of the method provided in an embodiment of the present invention, performing ridge regression fitting on the characteristic information of each pixel in each sample image and on the location information of each target object of that class and of the corresponding background specifically includes:
performing ridge regression fitting on the Gaussian-filtered characteristic information of each pixel in each sample image and on the corresponding Gaussian-filtered location information of each target object of that class and of the corresponding background.
In one possible implementation of the method provided in an embodiment of the present invention, the characteristic information is characteristic information formed by selecting the key data from scale-invariant feature transform features.
Based on the same inventive concept, an embodiment of the invention also provides a device for determining target object distribution density, comprising:
a sample-image selection module, configured to select, from multiple groups of non-calibrated images captured of the monitored scene, a sample image corresponding to each shooting angle;
an information determination module, configured to distinguish, in each sample image, each target object of a given class from the corresponding background, and to determine respectively the location information of each target object of that class and of the corresponding background and the characteristic information of each pixel contained in each sample image;
a fitting module, configured to perform ridge regression fitting on the characteristic information of each pixel in each sample image and on the location information of each target object of that class and of the corresponding background, to determine the mapping relationship of each sample image for that class of target object;
a density determination module, configured to determine the distribution density of that class of target object in each non-calibrated image, according to the determined mapping relationship of each sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image.
In one possible implementation of the device provided in an embodiment of the present invention, the density determination module is specifically configured to:
determine the score value of each pixel in the non-calibrated image according to the characteristic information of each pixel in the non-calibrated image and the mapping relationship of the sample image for that class of target object;
determine whether each pixel in the non-calibrated image belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the non-calibrated image;
determine the distribution density of that class of target object in the non-calibrated image according to the contours formed by the pixels belonging to that class of target object.
In one possible implementation of the device provided in an embodiment of the present invention, the density determination module is specifically configured to:
divide the non-calibrated image into multiple regions;
determine the score value of each pixel in each region according to the characteristic information of each pixel in the region and the mapping relationship of the sample image for that class of target object;
determine whether each pixel in each region belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the region;
determine the distribution density of that class of target object in each region according to the contours formed by the pixels belonging to that class of target object;
determine the distribution density of that class of target object in the non-calibrated image according to the distribution density of that class of target object in each region.
In one possible implementation of the device provided in an embodiment of the present invention, the device further includes:
a filtering module, configured to perform Gaussian filtering on the location information of each target object of that class and of the corresponding background.
In one possible implementation of the device provided in an embodiment of the present invention, the device further includes:
a clustering module, configured to perform clustering on the characteristic information of each pixel.
In one possible implementation of the device provided in an embodiment of the present invention, the fitting module is specifically configured to:
perform ridge regression fitting on the Gaussian-filtered characteristic information of each pixel in each sample image and on the corresponding Gaussian-filtered location information of each target object of that class and of the corresponding background.
In one possible implementation of the device provided in an embodiment of the present invention, the characteristic information is characteristic information formed by selecting the key data from scale-invariant feature transform features.
The present invention has the following beneficial effects:
In the method and device for determining target object distribution density provided by the embodiments of the present invention, each target object of a given class and the corresponding background are distinguished in each sample image, and the location information of each target object of that class and of the corresponding background and the characteristic information of each pixel contained in each sample image are determined respectively; ridge regression fitting is performed on the characteristic information of each pixel in each sample image and on the location information of each target object of that class and of the corresponding background, to determine the mapping relationship of each sample image for that class of target object; and the distribution density of that class of target object in each non-calibrated image is determined according to the determined mapping relationship of each sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image. For multiple non-calibrated images taken from the same shooting angle, the method provided by the embodiments of the present invention first determines the mapping relationship for that class of target object in the sample image; the distribution density of that class of target object in the other non-calibrated images can then be determined from that mapping relationship. This avoids fitting every image and improves calculation speed; in addition, using ridge regression fitting further improves calculation speed and meets the requirement of real-time monitoring.
Brief description of the drawings
Fig. 1 is the first of the flow charts of a method for determining target object distribution density provided by an embodiment of the present invention;
Fig. 2 is the second of the flow charts of a method for determining target object distribution density provided by an embodiment of the present invention;
Fig. 3 is the third of the flow charts of a method for determining target object distribution density provided by an embodiment of the present invention;
Fig. 4 is a structural schematic diagram of a device for determining target object distribution density provided by an embodiment of the present invention.
Specific embodiment
In the prior art, the calculation process of existing methods for determining target object distribution density takes so long that the distribution density of target objects cannot be monitored in real time; the embodiments described below address this problem.
An embodiment of the invention provides a method for determining target object distribution density which, as shown in Fig. 1, specifically includes the following steps:
S101: selecting, from multiple groups of non-calibrated images captured of the monitored scene, a sample image corresponding to each shooting angle;
S102: distinguishing, in each sample image, each target object of a given class from the corresponding background, and determining respectively the location information of each target object of that class and of the corresponding background and the characteristic information of each pixel contained in each sample image;
S103: performing ridge regression fitting on the characteristic information of each pixel in each sample image and on the location information of each target object of that class and of the corresponding background, to determine the mapping relationship of each sample image for that class of target object;
S104: determining the distribution density of that class of target object in each non-calibrated image, according to the determined mapping relationship of each sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image.
In the method provided by this embodiment, the mapping relationship of each sample image for a given class of target object is determined first; then, according to that mapping relationship and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image, the distribution density of that class of target object in each non-calibrated image is determined. For multiple non-calibrated images taken from the same shooting angle, the same mapping relationship is used to determine the distribution density of that class of target object, which avoids fitting every image and improves calculation speed; in addition, using ridge regression fitting further improves calculation speed and meets the requirement of real-time monitoring.
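For illustration, a minimal Python sketch of this fit-once, reuse-many-times flow; all names, array shapes and the regularization value are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def fit_view_mapping(sample_features, sample_target, lam=1.0):
    """Fit one ridge-regression mapping w for a single shooting angle.

    sample_features: (N, D) per-pixel characteristic information X of the sample image.
    sample_target:   (N,)   per-pixel target built from the location information
                            of the calibrated target objects and the background.
    """
    X, y = sample_features, sample_target
    d = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def pixel_scores(uncalibrated_features, w):
    """Reuse the mapping fitted once per shooting angle on any other
    non-calibrated image taken from the same angle; no new fitting is needed."""
    return uncalibrated_features @ w   # per-pixel score values

# Usage: fit once per shooting angle, then apply to every further image
# w = fit_view_mapping(X_sample, y_sample)
# scores = pixel_scores(X_new_image, w)
```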
The specific implementation of each of the above steps is described in detail below.
In step S101 provided by the embodiment of the present invention, the multiple groups of images of the monitored scene may be video frames captured by a video camera, or pictures taken by a still camera, which is not limited here.
In step S102 provided by the embodiment of the present invention, each target object of a given class and the corresponding background are distinguished in each sample image. For example, if the photographed scene contains people, vehicles, tall buildings, trees and so on, and people are chosen as the target objects, then all the people in the sample image are calibrated. The calibration may be done by framing each person in the sample image with a rectangular box; of course, other markers or other shapes may also be used, which is not limited here. If people are chosen as the target objects, everything else in the sample image, such as the trees and tall buildings, is background. When the target objects are calibrated with rectangular boxes, the coordinates of two opposite corners of each rectangular box may be chosen as the location information of each target object; since every position that does not belong to that class of target object is a background position, a real value may be assigned directly as the location information of the background. Choosing the location information of only the target objects and the corresponding background is merely a preferred implementation; other approaches may also be used in specific implementations, such as choosing the midpoint coordinates of the four sides of the rectangular box, which is not limited here.
Specifically, in step S102 provided by the embodiment of the present invention, when determining respectively the location information of each target object of that class and of the corresponding background, the method may further include:
performing Gaussian filtering on the location information of each target object of that class and of the corresponding background.
Gaussian filtering is a linear smoothing filter that can remove unreasonable values from data, for example abrupt points. Applying Gaussian filtering to the location information of each target object and of the corresponding background removes unreasonable values from the location information, which makes the subsequent fitting process more accurate.
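As a sketch of how this step could be realized, assuming the calibrated object positions are first rasterized into a per-pixel annotation map; the sigma value and the SciPy routine are illustrative choices, not specified by the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_location_map(height, width, object_centers, sigma=4.0):
    """Rasterize the calibrated object positions into a per-pixel map and
    smooth it with a Gaussian filter; background pixels stay near zero."""
    location_map = np.zeros((height, width), dtype=np.float64)
    for row, col in object_centers:
        location_map[row, col] = 1.0   # one impulse per calibrated object
    # Gaussian filtering removes abrupt, unreasonable values and spreads each
    # annotation smoothly, which stabilizes the subsequent fitting step.
    return gaussian_filter(location_map, sigma=sigma)
```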
Specifically, in step S102 provided by the embodiment of the present invention, when determining respectively the characteristic information of each pixel contained in each sample image, the method may further include:
performing clustering on the characteristic information of each pixel.
Clustering the characteristic information of each pixel helps prevent over-fitting or under-fitting in the subsequent fitting process. In specific implementations, K-means clustering, hierarchical clustering or other methods may be used; the specific clustering method is not limited here.
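A minimal sketch of this step using K-means; the library, the number of clusters and the idea of replacing each feature vector by its cluster centre are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_pixel_features(pixel_features, n_clusters=256, seed=0):
    """Cluster the per-pixel characteristic information and replace each
    feature vector by its cluster centre, which regularizes the feature
    space and helps avoid over- or under-fitting in the later regression."""
    kmeans = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    labels = kmeans.fit_predict(pixel_features)    # (N,) cluster index per pixel
    return kmeans.cluster_centers_[labels]         # (N, D) quantized features
```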
Specifically, step S103 provided by the embodiment of the present invention may specifically include:
performing ridge regression fitting on the Gaussian-filtered characteristic information of each pixel in each sample image and on the corresponding Gaussian-filtered location information of each target object of that class and of the corresponding background.
Processing the characteristic information of each pixel and the location information of each target object of that class and of the corresponding background with Gaussian filtering reduces the time complexity of the fitting process and thus increases its speed.
Ridge regression fitting is a biased-estimation regression method dedicated to the analysis of collinear data; it gives up unbiasedness in exchange for high numerical stability and thereby obtains higher computational accuracy.
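As an off-the-shelf alternative to the explicit formulas given later, the same per-view fit could be written with scikit-learn's ridge regressor; this is a sketch, and the inputs are assumed to be the arrays produced by the previous steps:

```python
from sklearn.linear_model import Ridge

def fit_mapping(pixel_features, location_targets, lam=1.0):
    """Fit the mapping from per-pixel characteristic information to the
    location-derived target with ridge (L2-regularized) regression."""
    model = Ridge(alpha=lam, fit_intercept=False)
    model.fit(pixel_features, location_targets)
    return model.coef_   # mapping coefficients w, one weight per feature dimension
```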
Specifically, step S104 provided by the embodiment of the present invention may specifically include:
determining the score value of each pixel in the non-calibrated image according to the characteristic information of each pixel in the non-calibrated image and the mapping relationship of the sample image for that class of target object;
determining whether each pixel in the non-calibrated image belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the non-calibrated image;
determining the distribution density of that class of target object in the non-calibrated image according to the contours formed by the pixels belonging to that class of target object.
The score value of each pixel in the non-calibrated image may be obtained by multiplying the characteristic information of the pixel by the mapping coefficient corresponding to the mapping relationship of the sample image for that class of target object; whether the pixel belongs to that class of target object is judged by checking whether its score value falls within the score-value interval of that class of target object or within the score-value interval of the background. By analysing the contours formed by the pixels belonging to that class of target object, the exact distribution density of that class of target object can be determined.
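A short sketch of the score computation and the foreground/background decision; the reference score-value interval is assumed here to be given as a (low, high) pair:

```python
import numpy as np

def classify_pixels(pixel_features, w, object_score_interval):
    """Compute each pixel's score as features . w and mark the pixel as
    belonging to the target class when its score falls inside the
    reference score-value interval of that class."""
    low, high = object_score_interval
    scores = pixel_features @ w                      # per-pixel score values
    is_object = (scores >= low) & (scores <= high)   # True where the pixel is a target pixel
    return scores, is_object
```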
In practical applications, the camera is usually mounted relatively high and is relatively far from the target objects, so the sizes of the individual target objects in the image do not differ much; the number of target objects can therefore be analysed from the sizes of the contours.
In specific implementations, counting can be done by framing the contours formed by the pixels with a rectangular or circular box of a certain size, where the size of the box may be set according to the size of the target objects in the sample image; the rectangular and circular boxes here are only examples and the shape of the box is not limited. For instance, when the target object is a human body, the contours formed by the pixels can be framed with a circular box of a certain size. If the contour formed by a group of pixels is about the size of the circular box, that position can be regarded as one human body. If two people overlap, the number of human bodies can be judged from the size of the contour; for example, if the width of the contour is about one and a half times the size of the circular box, that position can be regarded as two human bodies. The number of people in other overlapping cases can be determined by similar methods, which are not repeated here.
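For illustration, the contour analysis could be done with OpenCV roughly as follows; this is a sketch assuming OpenCV 4, and the reference area of a single object is an assumed input calibrated from the sample image:

```python
import cv2
import numpy as np

def count_objects_from_mask(is_object_mask, single_object_area):
    """Group the pixels classified as target objects into contours and
    estimate from each contour's area how many objects it covers, so that
    two overlapping people forming one contour still contribute two."""
    mask = is_object_mask.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    total = 0
    for contour in contours:
        area = cv2.contourArea(contour)
        total += max(1, int(round(area / single_object_area)))  # objects per contour
    return total
```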
Further, step S104 provided by the embodiment of the present invention may specifically include:
dividing the non-calibrated image into multiple regions;
determining the score value of each pixel in each region according to the characteristic information of each pixel in the region and the mapping relationship of the sample image for that class of target object;
determining whether each pixel in each region belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the region;
determining the distribution density of that class of target object in each region according to the contours formed by the pixels belonging to that class of target object;
determining the distribution density of that class of target object in the non-calibrated image according to the distribution density of that class of target object in each region.
For a non-calibrated image in which the distribution density of the target objects is relatively high, since the mapping relationship for that class of target object is the same in every image taken from the same viewing angle, the non-calibrated image can be divided into multiple regions, the distribution density of that class of target object can be determined in each region separately, and the distribution density of that class of target object in the whole non-calibrated image can then be determined. This further improves the accuracy of the distribution density.
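A short sketch of the region-splitting variant; the region size and the per-region counting routine are assumptions (the per-region estimate in the patent is the score and contour analysis described above):

```python
def density_by_regions(pixel_map, count_in_region, region_size=64):
    """Split a non-calibrated image's per-pixel map into fixed-size regions,
    estimate the count in each region with the supplied routine, and sum
    the per-region results to obtain the count for the whole image."""
    height, width = pixel_map.shape[:2]
    per_region = []
    for top in range(0, height, region_size):
        for left in range(0, width, region_size):
            tile = pixel_map[top:top + region_size, left:left + region_size]
            per_region.append(count_in_region(tile))
    return per_region, sum(per_region)

# Usage (hypothetical): reuse the contour-based counter from the sketch above
# per_region, total = density_by_regions(is_object_mask,
#                                        lambda tile: count_objects_from_mask(tile, 350.0))
```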
Specifically, the characteristic information provided by the embodiments of the present invention is characteristic information formed by selecting the key data from scale-invariant feature transform (SIFT) features. SIFT features are invariant to deformation, illumination and contrast changes and preserve spatial information; a SIFT feature is a vector of 128 values. The characteristic information provided by the embodiments of the present invention is an improved, visually refined scale-invariant feature transform feature (visual improvement scale-invariant feature transform, VISIFT), i.e. characteristic information formed by selecting the key data from the SIFT feature, consisting of 98 or 96 values. By improving the SIFT feature, removing unnecessary data and keeping only the key data, the amount of computation in the subsequent fitting process can be reduced and the fitting result made more accurate.
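As an illustrative sketch of a densely sampled, truncated SIFT feature; OpenCV's SIFT, the grid step and the choice of which dimensions to drop are assumptions, since the patent does not specify which key data are kept:

```python
import cv2

def visift_like_features(gray_image, step=4, keep_dims=96):
    """Compute dense SIFT descriptors on a regular grid and keep only the
    first keep_dims of the 128 values, mimicking a reduced feature that
    drops data judged unnecessary to speed up the subsequent fitting."""
    sift = cv2.SIFT_create()
    h, w = gray_image.shape
    keypoints = [cv2.KeyPoint(float(x), float(y), float(step))
                 for y in range(step // 2, h, step)
                 for x in range(step // 2, w, step)]
    _, descriptors = sift.compute(gray_image, keypoints)  # (N, 128) descriptors
    return descriptors[:, :keep_dims]                     # (N, keep_dims) reduced features
```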
Taking human bodies as the target objects, the method for determining target object distribution density provided by an embodiment of the present invention is described in detail below, as shown in Fig. 2:
S201: selecting, from multiple groups of non-calibrated images captured of the monitored scene, a sample image corresponding to each shooting angle;
S202: distinguishing each human body and the corresponding background in each sample image, determining respectively the location information of each human body and of the corresponding background, and performing Gaussian filtering on the location information of each human body and of the corresponding background;
S203: determining the characteristic information of each pixel contained in each sample image, and performing clustering on the characteristic information of each pixel;
S204: performing ridge regression fitting on the Gaussian-filtered characteristic information of each pixel in each sample image and on the corresponding Gaussian-filtered location information of each human body and of the corresponding background, to determine the mapping relationship of each sample image for human bodies;
S205: determining the score value of each pixel in the non-calibrated image according to the determined mapping relationship of each sample image for human bodies and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image;
S206: determining whether each pixel in the non-calibrated image belongs to a human body, according to the predetermined reference score-value intervals of human bodies and of the corresponding background and the score value of each pixel in the non-calibrated image;
S207: determining the distribution density of human bodies in the non-calibrated image according to the contours formed by the pixels belonging to human bodies.
Specifically, step S202 may specifically include:
distinguishing the human bodies and the corresponding background by framing each human body in each sample image with a rectangular box, determining for each human body one pair of diagonal coordinates of the corresponding rectangular box, obtaining the location information of the background by indirect assignment, and extracting the characteristic information of each pixel contained in each sample image.
Specifically, step S204 may specifically include:
performing ridge regression fitting on the characteristic information of each pixel in each sample image and on the location information of each human body and of the corresponding background using formula (1), by finding a set of k-dimensional coefficients w such that the value of formula (1) is minimized;
||Xw - Y||² + λ||w||² → min(w)    (1)
where X denotes the characteristic information of the corresponding pixels, Y denotes the location information of the human bodies and of the corresponding background, w is the mapping coefficient, and λ is the balance coefficient controlling the trade-off between prediction error and regularization.
To reduce time complexity, the characteristic information of each pixel and the location information of the human bodies and of the corresponding background are smoothed with Gaussian filtering, specifically using formula (2):
||G*(Xw - Y)||² + λ||w||² → min(w)    (2)
where G* denotes Gaussian filtering applied to the expected deviation, which ensures local unbiasedness.
Since convolution is linear, formula (3) can be obtained from formula (2) above:
||(G*X)w - (G*Y)||² + λ||w||² → min(w)    (3)
where (G*X) denotes applying Gaussian filtering to each column of X, G*Y denotes applying Gaussian filtering to the location information, and λ||w||² can be expressed as the Gaussian sum at the calibrated human-body centres;
Replacing the original formula with Xs = G*X and Ys = G*Y gives the mapping coefficient w, i.e. formula (4):
w = (XsᵀXs + λΓᵀΓ)⁻¹XsᵀYs    (4)
where Γ denotes the bias matrix and the superscript T denotes matrix transposition.
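A minimal NumPy sketch of this closed-form solution; it takes formula (4) to be the standard ridge/Tikhonov solution, applies the Gaussian filtering along the pixel axis as a one-dimensional simplification, and uses the identity matrix for Γ when no specific bias matrix is supplied:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def ridge_mapping(X, Y, lam=1.0, sigma=2.0, bias_matrix=None):
    """Solve w = (Xs^T Xs + lam * Gamma^T Gamma)^(-1) Xs^T Ys, where Xs and
    Ys are the Gaussian-filtered feature matrix and location target."""
    Xs = gaussian_filter1d(X, sigma=sigma, axis=0)   # G*X: filter each column of X
    Ys = gaussian_filter1d(Y, sigma=sigma, axis=0)   # G*Y: filter the location target
    d = Xs.shape[1]
    Gamma = np.eye(d) if bias_matrix is None else bias_matrix
    return np.linalg.solve(Xs.T @ Xs + lam * (Gamma.T @ Gamma), Xs.T @ Ys)
```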
The method provided by the embodiments of the present invention can also distinguish between different classes of target objects. Taking human bodies and vehicles as the target objects, the method for determining target object distribution density provided by an embodiment of the present invention is described in detail below, as shown in Fig. 3:
S301: selecting, from multiple groups of non-calibrated images captured of the monitored scene, a sample image corresponding to each shooting angle;
S302: distinguishing each human body and the corresponding background in each sample image, and determining respectively the location information of each human body and of the corresponding background and the characteristic information of each pixel contained in each sample image;
S303: performing ridge regression fitting on the characteristic information of each pixel in each sample image and on the location information of the human bodies and of the corresponding background, to determine the mapping relationship of each sample image for human bodies;
S304: determining the score value of each pixel in the non-calibrated image according to the determined mapping relationship of each sample image for human bodies and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image;
S305: determining whether each pixel in the non-calibrated image belongs to a human body, according to the predetermined reference score-value intervals of human bodies and of the corresponding background and the score value of each pixel in the non-calibrated image;
S306: for vehicles, determining whether each pixel in the non-calibrated image belongs to a vehicle according to steps S302 to S305;
S307: determining respectively the distribution densities of vehicles and human bodies in the non-calibrated image, according to the contours formed by the pixels belonging to human bodies and by the pixels belonging to vehicles.
The above example, with human bodies and vehicles as the target objects, shows that the method provided by the embodiments of the present invention can distinguish between different classes of target objects; if more classes of target objects are to be distinguished, they can be processed in a similar way, which is not repeated here.
Based on the same inventive concept, an embodiment of the invention also provides a device for determining target object distribution density, as shown in Fig. 4, comprising:
a sample-image selection module 401, configured to select, from multiple groups of non-calibrated images captured of the monitored scene, a sample image corresponding to each shooting angle;
an information determination module 402, configured to distinguish, in each sample image, each target object of a given class from the corresponding background, and to determine respectively the location information of each target object of that class and of the corresponding background and the characteristic information of each pixel contained in each sample image;
a fitting module 403, configured to perform ridge regression fitting on the characteristic information of each pixel in each sample image and on the location information of each target object of that class and of the corresponding background, to determine the mapping relationship of each sample image for that class of target object;
a density determination module 404, configured to determine the distribution density of that class of target object in each non-calibrated image, according to the determined mapping relationship of each sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image.
Specifically, the density determination module 404 may be specifically configured to:
determine the score value of each pixel in the non-calibrated image according to the characteristic information of each pixel in the non-calibrated image and the mapping relationship of the sample image for that class of target object;
determine whether each pixel in the non-calibrated image belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the non-calibrated image;
determine the distribution density of that class of target object in the non-calibrated image according to the contours formed by the pixels belonging to that class of target object.
Further, the density determination module 404 may be specifically configured to:
divide the non-calibrated image into multiple regions;
determine the score value of each pixel in each region according to the characteristic information of each pixel in the region and the mapping relationship of the sample image for that class of target object;
determine whether each pixel in each region belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the region;
determine the distribution density of that class of target object in each region according to the contours formed by the pixels belonging to that class of target object;
determine the distribution density of that class of target object in the non-calibrated image according to the distribution density of that class of target object in each region.
Further, the device may also include:
a filtering module, configured to perform Gaussian filtering on the location information of each target object of that class and of the corresponding background.
Further, the device may also include:
a clustering module, configured to perform clustering on the characteristic information of each pixel.
Specifically, the fitting module 403 may be specifically configured to:
perform ridge regression fitting on the Gaussian-filtered characteristic information of each pixel in each sample image and on the corresponding Gaussian-filtered location information of each target object of that class and of the corresponding background.
Specifically, the characteristic information may be characteristic information formed by selecting the key data from scale-invariant feature transform features.
Since the principle by which the device solves the problem is similar to that of the aforementioned method for determining target object distribution density, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
In the method and device for determining target object distribution density provided by the embodiments of the present invention, the mapping relationship of each sample image for a given class of target object is determined first; then, according to that mapping relationship and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image, the distribution density of that class of target object in each non-calibrated image is determined. For multiple non-calibrated images taken from the same shooting angle, the same mapping relationship is used to determine the distribution density of that class of target object, which avoids fitting every image and improves calculation speed; in addition, using ridge regression fitting further improves calculation speed and meets the requirement of real-time monitoring.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include these modifications and variations.

Claims (12)

1. A method for determining target object distribution density, characterized by comprising:
selecting, from multiple groups of non-calibrated images captured of a monitored scene, a sample image corresponding to each shooting angle;
distinguishing, in each sample image, each target object of a given class from the corresponding background, and determining respectively the location information of each said target object of that class and of the corresponding background and the characteristic information of each pixel contained in each said sample image;
performing ridge regression fitting on the characteristic information of each pixel in each said sample image and on the location information of each said target object of that class and of the corresponding background, to determine the mapping relationship of each said sample image for that class of target object;
determining the distribution density of that class of target object in each said non-calibrated image, according to the determined mapping relationship of each said sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each said sample image;
wherein determining the distribution density of that class of target object in each said non-calibrated image, according to the determined mapping relationship of each said sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each said sample image, specifically includes:
determining the score value of each pixel in the non-calibrated image according to the characteristic information of each pixel in the non-calibrated image and the mapping relationship of the sample image for that class of target object;
determining whether each pixel in the non-calibrated image belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the non-calibrated image;
determining the distribution density of that class of target object in the non-calibrated image according to the contours formed by the pixels belonging to that class of target object.
2. The method according to claim 1, characterized in that determining the distribution density of that class of target object in each non-calibrated image, according to the determined mapping relationship of each sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each sample image, specifically includes:
dividing the non-calibrated image into multiple regions;
determining the score value of each pixel in each region according to the characteristic information of each pixel in the region and the mapping relationship of the sample image for that class of target object;
determining whether each pixel in each region belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the region;
determining the distribution density of that class of target object in each region according to the contours formed by the pixels belonging to that class of target object;
determining the distribution density of that class of target object in the non-calibrated image according to the distribution density of that class of target object in each region.
3. The method according to claim 1, characterized in that, when determining respectively the location information of each target object of that class and of the corresponding background, the method further includes:
performing Gaussian filtering on the location information of each target object of that class and of the corresponding background.
4. The method according to claim 1, characterized in that, when determining respectively the characteristic information of each pixel contained in each sample image, the method further includes:
performing clustering on the characteristic information of each pixel.
5. The method according to claim 1, characterized in that performing ridge regression fitting on the characteristic information of each pixel in each sample image and on the location information of each target object of that class and of the corresponding background specifically includes:
performing ridge regression fitting on the Gaussian-filtered characteristic information of each pixel in each sample image and on the corresponding Gaussian-filtered location information of each target object of that class and of the corresponding background.
6. The method according to any one of claims 1 to 5, characterized in that the characteristic information is characteristic information formed by selecting the key data from scale-invariant feature transform features.
7. A device for determining target object distribution density, characterized by comprising:
a sample-image selection module, configured to select, from multiple groups of non-calibrated images captured of a monitored scene, a sample image corresponding to each shooting angle;
an information determination module, configured to distinguish, in each sample image, each target object of a given class from the corresponding background, and to determine respectively the location information of each said target object of that class and of the corresponding background and the characteristic information of each pixel contained in each said sample image;
a fitting module, configured to perform ridge regression fitting on the characteristic information of each pixel in each said sample image and on the location information of each said target object of that class and of the corresponding background, to determine the mapping relationship of each said sample image for that class of target object;
a density determination module, configured to determine the distribution density of that class of target object in each said non-calibrated image, according to the determined mapping relationship of each said sample image for that class of target object and the characteristic information of each pixel in the other non-calibrated images taken from the same shooting angle as each said sample image;
wherein the density determination module is specifically configured to:
determine the score value of each pixel in the non-calibrated image according to the characteristic information of each pixel in the non-calibrated image and the mapping relationship of the sample image for that class of target object;
determine whether each pixel in the non-calibrated image belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the non-calibrated image;
determine the distribution density of that class of target object in the non-calibrated image according to the contours formed by the pixels belonging to that class of target object.
8. The device according to claim 7, characterized in that the density determination module is specifically configured to:
divide the non-calibrated image into multiple regions;
determine the score value of each pixel in each region according to the characteristic information of each pixel in the region and the mapping relationship of the sample image for that class of target object;
determine whether each pixel in each region belongs to that class of target object, according to the predetermined reference score-value intervals of that class of target object and of the corresponding background and the score value of each pixel in the region;
determine the distribution density of that class of target object in each region according to the contours formed by the pixels belonging to that class of target object;
determine the distribution density of that class of target object in the non-calibrated image according to the distribution density of that class of target object in each region.
9. The device according to claim 7, characterized in that the device further comprises:
a filtering module, configured to perform Gaussian filtering on the location information of each target object of that class and of the corresponding background.
10. The device according to claim 7, characterized in that the device further comprises:
a clustering module, configured to perform clustering on the characteristic information of each pixel.
11. The device according to claim 7, characterized in that the fitting module is specifically configured to:
perform ridge regression fitting on the Gaussian-filtered characteristic information of each pixel in each sample image and on the corresponding Gaussian-filtered location information of each target object of that class and of the corresponding background.
12. The device according to any one of claims 7 to 11, characterized in that the characteristic information is characteristic information formed by selecting the key data from scale-invariant feature transform features.
CN201610580912.XA 2016-07-21 2016-07-21 A kind of target object distribution density determines method and device Active CN106204619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610580912.XA CN106204619B (en) 2016-07-21 2016-07-21 A kind of target object distribution density determines method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610580912.XA CN106204619B (en) 2016-07-21 2016-07-21 A kind of target object distribution density determines method and device

Publications (2)

Publication Number Publication Date
CN106204619A CN106204619A (en) 2016-12-07
CN106204619B true CN106204619B (en) 2019-07-16

Family

ID=57492217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610580912.XA Active CN106204619B (en) 2016-07-21 2016-07-21 A kind of target object distribution density determines method and device

Country Status (1)

Country Link
CN (1) CN106204619B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6072903A (en) * 1997-01-07 2000-06-06 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method
CN101025862A (en) * 2007-02-12 2007-08-29 吉林大学 Video based mixed traffic flow parameter detecting method
CN102509151A (en) * 2011-11-08 2012-06-20 上海交通大学 Video-processing-based crowd density and distribution estimation method
CN103295031A (en) * 2013-04-15 2013-09-11 浙江大学 Image object counting method based on regular risk minimization
CN105139378A (en) * 2015-07-31 2015-12-09 小米科技有限责任公司 Card boundary detection method and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009129001A (en) * 2007-11-20 2009-06-11 Sanyo Electric Co Ltd Operation support system, vehicle, and method for estimating three-dimensional object area

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6072903A (en) * 1997-01-07 2000-06-06 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method
CN101025862A (en) * 2007-02-12 2007-08-29 吉林大学 Video based mixed traffic flow parameter detecting method
CN102509151A (en) * 2011-11-08 2012-06-20 上海交通大学 Video-processing-based crowd density and distribution estimation method
CN103295031A (en) * 2013-04-15 2013-09-11 浙江大学 Image object counting method based on regular risk minimization
CN105139378A (en) * 2015-07-31 2015-12-09 小米科技有限责任公司 Card boundary detection method and apparatus

Also Published As

Publication number Publication date
CN106204619A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN110378931A (en) A kind of pedestrian target motion track acquisition methods and system based on multi-cam
CN108717531B (en) Human body posture estimation method based on Faster R-CNN
CN104835175B (en) Object detection method in a kind of nuclear environment of view-based access control model attention mechanism
CN111723721A (en) Three-dimensional target detection method, system and device based on RGB-D
WO2017054314A1 (en) Building height calculation method and apparatus, and storage medium
CN111091075B (en) Face recognition method and device, electronic equipment and storage medium
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN104504723B (en) Image registration method based on remarkable visual features
CN106023187B (en) A kind of method for registering images based on SIFT feature and angle relative distance
CN108416291B (en) Face detection and recognition method, device and system
CN103080979B (en) From the system and method for photo synthesis portrait sketch
CN105809626A (en) Self-adaption light compensation video image splicing method
CN103984920B (en) Three-dimensional face identification method based on sparse representation and multiple feature points
CN104537381B (en) A kind of fuzzy image recognition method based on fuzzy invariant features
Zou et al. Microarray camera image segmentation with Faster-RCNN
Zheng et al. What does plate glass reveal about camera calibration?
CN111626241A (en) Face detection method and device
CN116342519A (en) Image processing method based on machine learning
CN113610926B (en) Camera calibration method based on vanishing point orthogonality
CN103544699B (en) Method for calibrating cameras on basis of single-picture three-circle template
CN110599407B (en) Human body noise reduction method and system based on multiple TOF cameras in downward inclination angle direction
CN112232181A (en) Eagle eye color cognitive antagonism mechanism-simulated unmanned aerial vehicle marine target detection method
CN111489384B (en) Method, device, system and medium for evaluating shielding based on mutual viewing angle
Gardziński et al. Crowd density estimation based on voxel model in multi-view surveillance systems
CN106204619B (en) A kind of target object distribution density determines method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant