CN105335782A - Image-based target object counting method and apparatus - Google Patents

Image-based target object counting method and apparatus

Info

Publication number
CN105335782A
Authority
CN
China
Prior art keywords
target
image
sub-image block
target object
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410225632.8A
Other languages
Chinese (zh)
Inventor
伍健荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to CN201410225632.8A
Publication of CN105335782A
Legal status: Pending

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image-based target object counting method and apparatus. The method comprises the following steps: extracting multiple sub-image blocks from a current image, wherein each sub-image block is a region that lies outside a boundary line of a predetermined statistical region and is bounded by that boundary line; for each sub-image block, obtaining a foreground image of the sub-image block, obtaining from the foreground image fusion targets that each represent a passing target object, and comparing the fusion targets obtained from the sub-image block in the current image and from the same sub-image block in at least one previous image with the boundary line so as to count the target objects; and merging the counting results of the multiple sub-image blocks to obtain the number of target objects entering the predetermined statistical region. According to the invention, the number of target objects entering a specific region can be counted based on images, and the problem of repeated counting can be solved.

Description

Image-based target object counting method and apparatus
Technical field
The present invention relates to the field of image processing, and in particular to an image-based target object counting method and an image-based target object counting apparatus.
Background art
Real-time people-count information is a very useful resource for personnel management tasks such as pedestrian traffic management, passenger flow and crowd estimation, and for security applications. For example, statistics obtained by counting the people in a shopping center can help businesses better serve consumers and enhance the consumer experience, thereby effectively improving commercial performance. In addition, counting the passengers in public transport helps to improve its management and to raise efficiency.
In particular, if people-count information can be obtained for a certain region, for example the region in front of a glass showcase, the degree of people's interest in the corresponding product or location can be learned. People-count information accompanied by time information can be further analyzed and exploited.
Summary of the invention
In view of this, the present invention proposes a new image-based target object counting technique, so as to at least solve the problem of how to obtain people-count information for a certain region.
In view of this, according to one aspect of the present invention, an image-based target object counting method is provided, comprising: extracting a plurality of sub-image blocks from a current image, wherein each sub-image block is a region that lies outside one boundary line of a predetermined statistical region and is bounded by that boundary line; for each sub-image block, obtaining a foreground image of the sub-image block, obtaining fusion targets from the foreground image, each fusion target representing a target object that is passing by, and comparing the fusion targets obtained from the sub-image block in the current image and from the same sub-image block in at least one previous image with the boundary line, so as to perform target object counting; and merging the counting results of the plurality of sub-image blocks to obtain the number of target objects entering the predetermined statistical region.
According to another aspect of the present invention, an image-based target object counting apparatus is also provided, comprising: an extraction unit that extracts a plurality of sub-image blocks from a current image, wherein each sub-image block is a region that lies outside one boundary line of a predetermined statistical region and is bounded by that boundary line; a fusion target acquisition unit that, for each sub-image block, obtains a foreground image of the sub-image block and obtains fusion targets from the foreground image, each fusion target representing a target object that is passing by; a counting unit that compares the fusion targets obtained from the sub-image block in the current image and from the same sub-image block in at least one previous image with the boundary line, so as to perform target object counting; and a counting result fusion unit that merges the counting results of the plurality of sub-image blocks to obtain the number of target objects entering the predetermined statistical region.
According to a further aspect of the present invention, an electronic device is also provided, which comprises the image-based target object counting apparatus described above.
According to a further aspect of the present invention, a program product storing machine-readable instruction codes is also provided; when the instruction codes are executed, the machine can perform the image-based target object counting method described above.
In addition, according to other aspects of the present invention, a computer-readable storage medium is also provided, on which the above program product is stored.
With the image-based target object counting apparatus, the image-based target object counting method and the electronic device according to the embodiments of the present invention, the image outside the statistical region is divided into a plurality of sub-image blocks, the target objects in each sub-image block are counted separately, and the counting results of the plurality of sub-image blocks are fused. At least one of the following beneficial effects can thereby be achieved: the number of target objects entering the statistical region can be obtained; and a fusion algorithm is provided that prevents repeated counting.
These and other advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments of the present invention in conjunction with the accompanying drawings.
Brief description of the drawings
The present invention can be better understood by referring to the description given below in conjunction with the accompanying drawings, in which the same or similar reference signs are used throughout the figures to denote identical or similar parts. The accompanying drawings, together with the detailed description below, are incorporated in and form a part of this specification, and serve to further illustrate the preferred embodiments of the present invention and to explain its principles and advantages. In the drawings:
Fig. 1 is a schematic diagram of an image-based target object counting method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of defining a statistical region according to another embodiment of the present invention;
Fig. 3 is a schematic diagram of an image-based target object counting method according to another embodiment of the present invention;
Fig. 4 is a schematic diagram of extracted sub-image blocks according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of people counting according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of an image-based target object counting method according to still another embodiment of the present invention;
Fig. 7 illustrates examples of fusing candidate targets into a fusion target, in which Fig. 7(a) shows the case where two candidate targets overlap each other and Fig. 7(b) shows the case where the distance between candidate targets is less than a preset distance;
Fig. 8 is a schematic diagram of the boundary lines and corners of a statistical region according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of the sub-image blocks containing target objects in the scene shown in Fig. 8;
Fig. 10 is a schematic diagram of the distribution of the information of the target objects on a time line;
Fig. 11 is a block diagram of an image-based target object counting apparatus according to an embodiment of the present invention.
Detailed description of the embodiments
In order to understand the above objects, features and advantages of the present invention more clearly, the present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments can be combined with one another.
Many specific details are set forth in the following description so that the present invention can be fully understood; however, the present invention can also be implemented in other ways different from those described here. Therefore, the present invention is not limited by the specific embodiments disclosed below.
Fig. 1 shows a schematic diagram of an image-based target object counting method according to an embodiment of the present invention.
As shown in Fig. 1, the image-based target object counting method according to an embodiment of the present invention may comprise the following steps:
Step 102: receive an input video image and set a predetermined statistical region on the image. This step defines the statistical region; the number of people entering this statistical region is to be counted.
Step 104: each boundary line of the predetermined region can be regarded as a counting line, and the number of people crossing each counting line into the statistical region is counted.
Step 106: the counting results of all counting lines are combined and fused, so that the same person in the same scene is not counted more than once by different counting lines.
In the above step 102, as shown in Fig. 2, the boundary lines of the statistical region and the corresponding counting directions are set on the image.
As shown in Fig. 2, the statistical region is a closed polygon comprising several counting lines, and each counting line is a boundary line of the statistical region. In this embodiment, the statistical region is a quadrilateral and comprises four counting lines. Each counting line has a counting direction that defines the direction of entry into the statistical region. If a person crosses a counting line from the outside and enters the statistical region, a counting operation is performed.
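As a concrete, non-limiting illustration of how such a configuration might be represented in software, the Python sketch below models a quadrilateral statistical region like the one in Fig. 2 as four counting lines, each carrying the direction that counts as entering the region. The class and function names (CountingLine, StatRegion, make_rectangle_region) are illustrative assumptions and do not come from the patent.

from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class CountingLine:
    """One edge of the statistical region, used as a counting line."""
    start: Point
    end: Point
    inward_direction: Point  # direction that counts as entering the region

@dataclass
class StatRegion:
    """Closed polygon whose edges are the counting lines."""
    lines: List[CountingLine]

def make_rectangle_region(x0: float, y0: float, x1: float, y1: float) -> StatRegion:
    # A quadrilateral region with four counting lines, as in Fig. 2.
    corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
    lines = []
    for i in range(4):
        a, b = corners[i], corners[(i + 1) % 4]
        dx, dy = b[0] - a[0], b[1] - a[1]
        # 90-degree rotation of the edge vector; for this corner ordering it
        # points toward the interior of the rectangle.
        lines.append(CountingLine(a, b, (-dy, dx)))
    return StatRegion(lines)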
The above step 104 may comprise two sub-steps, as shown in Fig. 3:
Step 302: extract a plurality of sub-image blocks from the current image. Each sub-image block is a region that lies outside one boundary line of the predetermined statistical region and is bounded by that boundary line, see Fig. 4.
Step 304: after the sub-image blocks are extracted, people counting is carried out on the basis of each sub-image block. Step 304 may in turn comprise three parts: adaptive foreground segmentation, obtaining fusion targets, and counting the fusion targets.
In the adaptive foreground segmentation, the current sub-image block is differenced against a reference sub-image block to obtain an image containing the moving-region information of the current sub-image block, which is then filtered and binarized into the foreground image. The foreground image is a binary image, such as the foreground image of the sub-image block shown in Fig. 5.
Then, fusion targets are obtained from the foreground image, each fusion target representing a person who is passing by. Specifically, once the foreground image is obtained, moving targets are segmented from it, and each moving target serves as a candidate target for a passing person. If the number of candidate targets is greater than zero, the candidate targets are merged according to preset rules to obtain fusion targets, see Fig. 5.
Finally, people counting is performed by comparing the fusion targets obtained from the current image and from at least one previous image with the preset statistical boundary line. If the number of fusion targets is zero, this step is skipped; otherwise it is performed.
Fig. 6 shows a flow chart of a specific process of the image-based people counting method according to the present invention. As shown in Fig. 6, the foreground image acquisition part comprises steps 602-606. Specifically, first, in step 602, for each sub-image block in the current image (for the definition of a sub-image block, refer to the definition above, which is not repeated here), the difference image between the sub-image block and a reference image is calculated. The reference image is the image block of the same area in an image captured before the current image; in the initial state, an average environment background image can be used as the reference image. In the calculation of the difference image, when the input image is an RGB image, the absolute difference between the current image and the reference image is calculated separately for each of the red (R), green (G) and blue (B) components of each pixel, and these absolute differences are then averaged to obtain the value of the corresponding pixel in the difference image; the difference image is obtained in this way. For a grayscale image, the difference image can of course be obtained similarly from the grayscale values. The resulting difference image is a grayscale image.
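The per-channel absolute-difference computation of step 602 could be sketched in Python with NumPy as follows; the function name and the array conventions (H x W x 3 uint8 inputs) are assumptions made for illustration.

import numpy as np

def difference_image(current_block: np.ndarray, reference_block: np.ndarray) -> np.ndarray:
    """Per-pixel absolute difference between the current sub-image block and the
    reference block of the same area, averaged over the R, G, B components.
    Both inputs are H x W x 3 uint8 arrays; the result is an H x W grayscale image."""
    # Cast to a signed type first so the subtraction cannot wrap around.
    diff = np.abs(current_block.astype(np.int16) - reference_block.astype(np.int16))
    return diff.mean(axis=2).astype(np.uint8)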
Then, in step 604, the difference image is filtered to obtain a filtered image. In one embodiment, the present invention adopts median filtering. Specifically, for a given pixel, the median of the values of the pixels surrounding it is found and used as the value of that pixel, whereby the filtered image is obtained. Although median filtering is used in this specification, other filtering methods commonly used in the art may also be employed.
Then, in step 606, the foreground image is obtained by assigning a first value to the pixels of the filtered image whose values are greater than or equal to an adaptive segmentation threshold and assigning a second value to the other pixels. Specifically, if the value of a pixel in the filtered difference image is greater than or equal to the adaptive segmentation threshold, the pixel is assigned the value 1 (i.e. a white point) as a foreground pixel; otherwise it is assigned the value 0 (i.e. a black point, a background pixel). These values are only exemplary; other values may be adopted as long as they distinguish foreground pixels from background pixels. It can be seen that the foreground image is a binary image.
Through steps 602-606 above, a foreground image composed of the moving-region information is obtained from the input current image; that is, the influence of the environmental background is eliminated. It should be noted that an adaptive segmentation threshold is used in the present invention, where the adaptive segmentation threshold is the greater of (a) the product of the mean pixel value of the filtered image and a predetermined coefficient and (b) a predetermined minimum segmentation threshold. The predetermined coefficient and the predetermined minimum segmentation threshold used here are set empirically so as to remove the influence of the environmental background as far as possible. If the predetermined coefficient is set too small, the predetermined minimum segmentation threshold acts as the adaptive segmentation threshold, so that the obtained foreground image contains many white points, and more fusion targets will be obtained in the fusion target acquisition described below. Conversely, if the predetermined coefficient is set too large, fewer fusion targets will be obtained. Too many fusion targets increase the amount of computation in the subsequent processing, while too few fusion targets affect the accuracy of the people counting; therefore, those skilled in the art can set suitable values empirically to obtain the desired result.
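A minimal sketch of steps 604-606, assuming SciPy's median filter and treating the predetermined coefficient and the predetermined minimum segmentation threshold as illustrative placeholder values, might look as follows.

import numpy as np
from scipy.ndimage import median_filter

def foreground_mask(diff_img: np.ndarray,
                    coefficient: float = 2.0,      # illustrative value only
                    min_threshold: float = 20.0) -> np.ndarray:  # illustrative value only
    """Median-filter the difference image, then binarize it with the adaptive
    segmentation threshold described above: the greater of (mean pixel value of
    the filtered image x a predetermined coefficient) and a predetermined minimum
    segmentation threshold. Foreground pixels are assigned 1, background pixels 0."""
    filtered = median_filter(diff_img, size=3)
    threshold = max(filtered.mean() * coefficient, min_threshold)
    return (filtered >= threshold).astype(np.uint8)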
After the foreground image is obtained as described above, the method proceeds to the fusion target acquisition part. Continuing with reference to Fig. 6, the fusion target acquisition part comprises steps 608-612. Specifically, in step 608, the minimum rectangles that enclose connected regions of pixels having the first value in the foreground image and that satisfy a first predetermined condition are taken as candidate targets, each candidate target characterizing a person who is passing by, where the first predetermined condition is that the number of pixels having the first value in the minimum rectangle serving as the candidate target is not less than a predetermined number. In the example described above, the pixels assigned "1" serve as foreground pixels, i.e. are represented as white points; therefore, a minimum rectangle that encloses a connected region of white points and in which the number of white points is not less than the predetermined number is taken as a candidate target.
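One way to sketch step 608 is to label the connected foreground regions and keep the minimum bounding rectangles that contain at least a predetermined number of foreground pixels. The implementation below uses SciPy's connected-component labelling; the rectangle format (top, left, bottom, right) and the min_pixels value are illustrative assumptions.

import numpy as np
from scipy import ndimage

def candidate_targets(foreground: np.ndarray, min_pixels: int = 50):
    """Return the minimum bounding rectangles, as (top, left, bottom, right),
    of connected foreground regions whose bounding rectangle contains at least
    `min_pixels` foreground pixels (the first predetermined condition)."""
    labels, _ = ndimage.label(foreground)
    rects = []
    for obj_slice in ndimage.find_objects(labels):
        # Count every foreground pixel inside the minimum rectangle.
        if np.count_nonzero(foreground[obj_slice]) >= min_pixels:
            rows, cols = obj_slice
            rects.append((rows.start, cols.start, rows.stop, cols.stop))
    return rects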
Next, as shown in Fig. 6, it is judged whether any candidate target has been obtained (step 610). When no candidate target is obtained (step 610, "No"), it is determined that there is no person in the current image, so the following steps are unnecessary, and the method proceeds to step 620 to end the processing of the current image. On the other hand, when candidate targets are obtained (step 610, "Yes"), the method proceeds to step 612 to obtain the fusion targets representing the persons who are passing by.
Specifically, in step 612, the obtained candidate targets are fused according to predefined rules to obtain the fusion targets. Fusion here means treating the candidate targets that satisfy the rules as one set, where the predefined rules comprise: the minimum rectangle containing candidate targets that overlap one another among the obtained candidate targets is taken as a fusion target, as shown in Fig. 7(a); the minimum rectangle containing candidate targets whose mutual distance is not greater than a preset distance is taken as a fusion target, as shown in Fig. 7(b); and a candidate target that does not overlap any other candidate target and whose distance from any other candidate target is greater than the preset distance is taken as a fusion target by itself, as shown in Fig. 7(b). In addition to the above restrictions, the area of a fusion target should be not less than a predetermined area; if the area of a fusion target is too small, it is unlikely to represent a person, and such a region is therefore ignored, which reduces the amount of computation.
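The predefined fusion rules of step 612 could be approximated by the sketch below: candidate rectangles that overlap or lie within a preset distance of each other (measured here, as an assumption, by the gap between the rectangles, since the patent does not fix a particular distance metric) are grouped, each group is enclosed by its minimum rectangle, and fusion targets smaller than a predetermined area are dropped. All threshold values and names are illustrative.

def rect_gap(a, b):
    """Gap between two rectangles (top, left, bottom, right); 0 if they overlap or touch."""
    dy = max(a[0] - b[2], b[0] - a[2], 0)
    dx = max(a[1] - b[3], b[1] - a[3], 0)
    return max(dx, dy)

def fuse_candidates(rects, max_distance=10, min_area=200):
    """Group candidate rectangles that overlap or lie within `max_distance` of each
    other, enclose each group with its minimum rectangle, and keep only fusion
    targets whose area is at least `min_area` (all values illustrative)."""
    parent = list(range(len(rects)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            if rect_gap(rects[i], rects[j]) <= max_distance:
                parent[find(i)] = find(j)
    groups = {}
    for i, r in enumerate(rects):
        groups.setdefault(find(i), []).append(r)
    fused = []
    for members in groups.values():
        top = min(r[0] for r in members)
        left = min(r[1] for r in members)
        bottom = max(r[2] for r in members)
        right = max(r[3] for r in members)
        if (bottom - top) * (right - left) >= min_area:
            fused.append((top, left, bottom, right))
    return fused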
When no fusion target is obtained, it is determined that there is no counting object in the current frame image, so the subsequent steps are unnecessary, and the method proceeds to step 620 to end the processing of the current frame image. On the other hand, when fusion targets are obtained, the method proceeds to the people counting part.
Specifically, as shown in Fig. 6, it is judged whether any fusion target has been obtained (step 614). When no fusion target is obtained (step 614, "No"), it is determined that there is no person in the current image, so the following steps are unnecessary, and the method proceeds to step 620 to end the processing of the current image. On the other hand, when fusion targets are obtained (step 614, "Yes"), the method proceeds to step 616 for subsequent processing.
As shown in Fig. 6, the people counting part comprises steps 616-620. Specifically, in step 616, the fusion targets that satisfy a second predetermined condition are selected from the obtained fusion targets as object fusion targets, where the second predetermined condition is that, in a fusion target serving as an object fusion target, the ratio of the area occupied by the pixels having the first value (i.e. the foreground pixels) to the total area of the fusion target is greater than a predetermined ratio threshold. If no such object fusion target is found (step 616, "No"), the method proceeds to step 620 to end the processing.
On the other hand, when such object fusion targets are found (step 616, "Yes"), the method proceeds to step 618.
Specifically, in step 618, it is judged whether an object fusion target in a sub-image block of the current image crosses the counting line. If so, then for each object fusion target in the sub-image block of the current image, the fusion target that, among the fusion targets obtained from at least the previous image, has the largest overlapping area with this object fusion target, where this largest overlapping area exceeds a predetermined area threshold, is found as the most-overlapping fusion target of this object fusion target. In other words, the past position of the person identified in the current image is found in the preceding image.
Afterwards, each of the object fusion targets of the current image and the corresponding most-overlapping fusion target in at least the previous image are compared with the preset statistical boundary line to determine whether to increment the count: if an object fusion target in the current image crosses the preset statistical boundary line and the corresponding most-overlapping fusion target in at least the previous image does not cross the same preset statistical boundary line, the count is incremented. That is, the person concerned crosses the preset statistical boundary line between two consecutive frame images, which means the person has passed through, and the count is therefore incremented.
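Step 618 and the subsequent comparison could be sketched as follows, assuming rectangles in (top, left, bottom, right) form and a caller-supplied crosses_line predicate for the boundary-line test; the fallback to the second frame back, described next, is omitted here for brevity, and all names and thresholds are illustrative.

def overlap_area(a, b):
    """Intersection area of two rectangles given as (top, left, bottom, right)."""
    h = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    w = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    return h * w

def most_overlapping(target, previous_targets, area_threshold=100):
    """Among the previous frame's fusion targets, return the one with the largest
    overlap with `target`, provided that overlap exceeds the predetermined area
    threshold; otherwise return None."""
    best, best_area = None, 0
    for prev in previous_targets:
        a = overlap_area(target, prev)
        if a > best_area:
            best, best_area = prev, a
    return best if best_area > area_threshold else None

def update_count(count, current_targets, previous_targets, crosses_line):
    """Increment the count for every current object fusion target that has crossed
    the counting line while its most-overlapping previous target had not.
    `crosses_line(rect)` is a caller-supplied predicate for the boundary test."""
    for cur in current_targets:
        if not crosses_line(cur):
            continue
        prev = most_overlapping(cur, previous_targets)
        if prev is not None and not crosses_line(prev):
            count += 1
    return count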
As described above, by comparing the position of the person detected in the current image and the position of that person in the previous image with the preset statistical boundary line, it can be determined whether a person has passed through. It should be noted that every sub-image block is processed as above, so that a people counting result corresponding to each sub-image block is obtained.
In addition, it should be noted that, when no fusion target in the previous frame image overlaps most with a fusion target of the current image, the present invention can search the second frame image back for the fusion target that overlaps most with the fusion target of the current image and whose overlapping area satisfies the requirement, and then determine whether to count by comparing with the preset boundary line. This situation is shown in Fig. 6. With such an arrangement, when no most-overlapping fusion target is found in the previous frame image, a most-overlapping target may still be found in the second frame image back, which improves the counting accuracy. Of course, it may happen that no most-overlapping fusion target is found either in the previous frame image or in the second frame image back; in that case, no counting is performed.
After the people counting of the current image, the method of the present invention proceeds to step 620, where the relevant parameters and the reference image are updated so as to process the next frame image.
The process of fusing the counting results of the plurality of sub-image blocks according to the present invention is described in detail below with reference to Fig. 8 to Fig. 10.
When performing target object counting for a sub-image block, the information of each target object that crosses the boundary line and enters the predetermined statistical region is recorded. For example, if a person crosses boundary line n at time t from position p and enters the predetermined statistical region, the recorded information is (n, t, p). If the predetermined statistical region has N corners (the angles between two adjacent boundary lines), the value of p ranges from 1 to N. In this embodiment, the predetermined statistical region has 4 corners, and Fig. 8 shows the positions of these corners. If the value of p is 4, the target object passes through corner 4; if the value of p is 0, the target object does not pass through any corner.
Suppose Fig. 8 is the current frame image. A plurality of sub-image blocks can then be extracted from the current frame image shown in Fig. 8, and Fig. 9 shows three sub-image blocks containing target objects. Target object I enters the predetermined statistical region from corner 4 and is counted by both boundary lines 4 and 3; target object II enters the predetermined statistical region from corner 1 and is counted by both boundary lines 4 and 1; target object III enters the predetermined statistical region across boundary line 2 and is counted only by boundary line 2. A target object that enters the predetermined statistical region from a corner may therefore be counted more than once, and the present invention proposes a processing method for solving this repeated-counting problem:
If two recorded target objects are judged to have the same passing position (i.e. the same corner) and the difference between the counting time points of the two target objects is less than a preset time period, the two target objects are determined to be the same target object entering the predetermined statistical region at the same moment, and the count of whichever of the two target objects has the later counting time point is ignored.
As shown in Fig. 10, since target object (1, t, 1) and target object (2, t+k, 1) pass through the same corner 1 and k is less than the preset time period, target object (1, t, 1) and target object (2, t+k, 1) are the same target object entering the predetermined statistical region at the same moment; that is, target object (2, t+k, 1) has been counted repeatedly. To facilitate the subsequent counting calculation, target object (2, t+k, 1), i.e. the repeated target object with the later time point, is discarded.
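The corner-based deduplication could be sketched as below, where each record is a (boundary line, counting time point, corner) tuple and records sharing the same non-zero corner within the preset time period are treated as the same target object, the later one being dropped. The time window value and function name are illustrative assumptions.

def deduplicate(records, time_window=5):
    """records: iterable of (line_id, time, corner) tuples.
    Drop a record if an already-kept record has the same non-zero corner and the
    time difference is less than `time_window` (value illustrative)."""
    kept = []
    for line_id, t, corner in sorted(records, key=lambda r: r[1]):
        duplicate = any(
            corner != 0 and c == corner and t - tk < time_window
            for _, tk, c in kept
        )
        if not duplicate:
            kept.append((line_id, t, corner))
    return len(kept), kept

# Example analogous to Fig. 10: deduplicate([(1, 10, 1), (2, 12, 1), (3, 30, 2)])
# drops the second record, which passes the same corner 1 within the window,
# and returns a count of 2.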
Fig. 11 shows a block diagram of a target object counting apparatus according to an embodiment of the present invention.
As shown in Fig. 11, the image-based target object counting apparatus 1100 according to an embodiment of the present invention may comprise:
an extraction unit 1102, which extracts a plurality of sub-image blocks from a current image, each sub-image block being a region that lies outside one boundary line of a predetermined statistical region and is bounded by that boundary line;
a fusion target acquisition unit 1104, which, for each sub-image block, obtains a foreground image of the sub-image block and obtains fusion targets from the foreground image, each fusion target representing a target object that is passing by;
a counting unit 1106, which compares the fusion targets obtained from the sub-image block in the current image and from the same sub-image block in at least one previous image with the boundary line, so as to perform target object counting; and
a counting result fusion unit 1108, which merges the counting results of the plurality of sub-image blocks to obtain the number of target objects entering the predetermined statistical region.
The counting unit 1106 comprises: a recording unit 1106A, which, when target object counting is performed for a sub-image block, records the information of each target object that crosses the boundary line and enters the predetermined statistical region.
The counting result fusion unit 1108 comprises: a judging unit 1108A, which judges, according to the information of the target objects, whether the same target object entering the predetermined statistical region at the same moment has been counted repeatedly; and a screening unit 1108B, which screens the counting results according to the judgment result.
The information comprises a counting time point and a passing position, the passing position being the position of a corner where two boundary lines of the predetermined statistical region intersect. The judging unit 1108A is further configured to determine, when two recorded target objects have the same passing position and the difference between the counting time points of the two target objects is less than a preset time period, that the two target objects are the same target object entering the predetermined statistical region at the same moment; and the screening unit 1108B ignores the count of whichever of the two target objects has the later counting time point.
The fusion target acquisition unit 1104 comprises:
a candidate target determination unit 1104A, which takes the minimum rectangles that enclose connected regions of pixels having a preset value in the foreground image and that satisfy a first predetermined condition as candidate targets, each candidate target characterizing a candidate for a target object that is passing by, where the first predetermined condition is that the number of pixels having the preset value in the minimum rectangle serving as the candidate target is not less than a predetermined number; and a fusion unit 1104B, which fuses the obtained candidate targets to obtain the fusion targets.
The counting unit 1106 comprises:
a selecting unit 1106B, which selects, from the obtained fusion targets, the fusion targets that satisfy a second predetermined condition as object fusion targets, where the second predetermined condition is that, in a fusion target serving as an object fusion target, the ratio of the area occupied by the pixels having the preset value to the total area of the fusion target is greater than a predetermined ratio threshold; a matching unit 1106C, which, for each object fusion target in a sub-image block of the current image, finds, among the fusion targets obtained from the same sub-image block of at least the previous image, the fusion target that has the largest overlapping area with this object fusion target, where this largest overlapping area exceeds a predetermined area threshold, as the most-overlapping fusion target of this object fusion target; and a comparing unit 1106D, which compares each of the object fusion targets of the sub-image block of the current image and the corresponding most-overlapping fusion target in the same sub-image block of at least the previous image with the boundary line to determine whether to increment the count, where the count is incremented if an object fusion target in the sub-image block of the current image crosses the boundary line and the corresponding most-overlapping fusion target in the same sub-image block of at least the previous image does not cross the same boundary line.
The image-based target object counting apparatus according to the present invention defines a statistical region in the image, extracts a sub-image block corresponding to each boundary line of the statistical region, performs the target object counting algorithm on each sub-image block, and then fuses the counting results of the sub-image blocks to obtain the number of target objects entering the statistical region. The degree of attention or interest received by the corresponding product or location can therefore be understood from this statistic. At the same time, deduplication processing is performed when the results are combined, which solves the problem of the same target object being counted repeatedly and thus improves the counting accuracy.
In addition, an embodiment of the present invention also provides an electronic device comprising the image-based target object counting apparatus described above. In a specific implementation of the electronic device according to an embodiment of the present invention, the electronic device may be any one of the following devices: a computer, a tablet computer, a personal digital assistant, a multimedia playing device, a mobile phone, an electronic book reader, and so on. The electronic device has the various functions and technical effects of the image-based target object counting apparatus described above, which are not repeated here.
Each constituent unit, sub-unit, module and the like in the image-based target object counting apparatus according to the embodiments of the present invention described above can be configured by software, firmware, hardware or any combination thereof. When implemented by software or firmware, a program constituting the software or firmware can be installed from a storage medium or a network onto a machine having a dedicated hardware structure, and the machine, once the various programs are installed, can perform the various functions of the constituent units and sub-units described above.
In addition, the present invention also proposes a program product storing machine-readable instruction codes. When the instruction codes are read and executed by a machine, the image-based target object counting method according to the embodiments of the present invention described above can be performed. Correspondingly, the various storage media carrying such a program product, such as magnetic disks, optical disks, magneto-optical disks and semiconductor memories, are also included in the disclosure of the present invention.
In addition, the methods of the various embodiments of the present invention are not limited to being performed in the temporal order described in the specification or shown in the drawings; they may also be performed in other temporal orders, in parallel, or independently. Therefore, the order of execution of the methods described in this specification does not limit the technical scope of the present invention.
In addition, it is obvious that the operations of the methods according to the present invention described above can also be implemented in the form of computer-executable programs stored on various machine-readable storage media.
Moreover, the object of the present invention can also be achieved in the following manner: a storage medium storing the executable program code described above is supplied, directly or indirectly, to a system or device, and a computer or a central processing unit (CPU) in the system or device reads and executes the program code.
In this case, as long as the system or device has the function of executing programs, the embodiments of the present invention are not limited to programs, and the program may take any form, for example an object program, a program executed by an interpreter, or a script supplied to an operating system.
The machine-readable storage media mentioned above include, but are not limited to, various memories and storage units, semiconductor devices, disk units such as optical, magnetic and magneto-optical disks, and other media suitable for storing information.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An image-based target object counting method, comprising:
extracting a plurality of sub-image blocks from a current image, wherein each said sub-image block is a region that lies outside one boundary line of a predetermined statistical region and is bounded by said boundary line;
for each sub-image block, obtaining a foreground image of said sub-image block, and obtaining fusion targets from said foreground image, each fusion target representing a target object that is passing by;
comparing the fusion targets obtained from the sub-image block in said current image and from the same sub-image block in at least one previous image with the boundary line, so as to perform target object counting; and
merging the counting results of the plurality of said sub-image blocks to obtain the number of target objects entering said predetermined statistical region.
2. The image-based target object counting method according to claim 1, wherein, when target object counting is performed for said sub-image block, the information of each target object that crosses said boundary line and enters said predetermined statistical region is recorded;
and merging the counting results of the plurality of said sub-image blocks comprises: judging, according to the information of said target objects, whether the same target object entering said predetermined statistical region at the same moment has been counted repeatedly; and
screening the counting results according to the judgment result.
3. The image-based target object counting method according to claim 2, wherein said information comprises a counting time point and a passing position, said passing position being the position of a corner where two boundary lines of said predetermined statistical region intersect;
and judging whether the same target object entering said predetermined statistical region at the same moment has been counted repeatedly comprises:
if the passing positions of two recorded target objects are the same and the difference between the counting time points of said two target objects is less than a preset time period, determining that said two target objects are the same target object entering said predetermined statistical region at said same moment; and
ignoring the count of whichever of said two target objects has the later counting time point.
4. The image-based target object counting method according to claim 1, wherein obtaining said fusion targets comprises:
taking the minimum rectangles that enclose connected regions of pixels having a preset value in said foreground image and that satisfy a first predetermined condition as candidate targets, each candidate target characterizing a candidate for a target object that is passing by, wherein said first predetermined condition is that the number of pixels having said preset value in the minimum rectangle serving as said candidate target is not less than a predetermined number; and
fusing the obtained candidate targets to obtain said fusion targets.
5. The image-based target object counting method according to any one of claims 1 to 4, wherein performing target object counting comprises:
selecting, from the obtained fusion targets, the fusion targets that satisfy a predetermined condition as object fusion targets, wherein said predetermined condition is that, in a fusion target serving as said object fusion target, the ratio of the area occupied by the pixels having said preset value to the total area of the fusion target is greater than a predetermined ratio threshold;
for each object fusion target in the sub-image block of the current image, finding, among the fusion targets obtained from the same sub-image block of at least the previous image, the fusion target that has the largest overlapping area with this object fusion target, wherein this largest overlapping area exceeds a predetermined area threshold, as the most-overlapping fusion target of this object fusion target; and
comparing each of the object fusion targets of the sub-image block of the current image and the corresponding most-overlapping fusion target in the same sub-image block of at least the previous image with the boundary line to determine whether to increment the count,
wherein, if an object fusion target in the sub-image block of the current image crosses the boundary line and the corresponding most-overlapping fusion target in the same sub-image block of at least the previous image does not cross the same boundary line, the count is incremented.
6. An image-based target object counting apparatus, comprising:
an extraction unit, which extracts a plurality of sub-image blocks from a current image, wherein each said sub-image block is a region that lies outside one boundary line of a predetermined statistical region and is bounded by said boundary line;
a fusion target acquisition unit, which, for each sub-image block, obtains a foreground image of said sub-image block and obtains fusion targets from said foreground image, each fusion target representing a target object that is passing by;
a counting unit, which compares the fusion targets obtained from the sub-image block in said current image and from the same sub-image block in at least one previous image with the boundary line, so as to perform target object counting; and
a counting result fusion unit, which merges the counting results of the plurality of said sub-image blocks to obtain the number of target objects entering said predetermined statistical region.
7. The image-based target object counting apparatus according to claim 6, wherein said counting unit comprises: a recording unit, which, when target object counting is performed for said sub-image block, records the information of each target object that crosses said boundary line and enters said predetermined statistical region;
and said counting result fusion unit comprises:
a judging unit, which judges, according to the information of said target objects, whether the same target object entering said predetermined statistical region at the same moment has been counted repeatedly; and
a screening unit, which screens the counting results according to the judgment result.
8. The image-based target object counting apparatus according to claim 7, wherein said information comprises a counting time point and a passing position, said passing position being the position of a corner where two boundary lines of said predetermined statistical region intersect;
said judging unit is further configured to determine, when the passing positions of two recorded target objects are the same and the difference between the counting time points of said two target objects is less than a preset time period, that said two target objects are the same target object entering said predetermined statistical region at said same moment; and
said screening unit ignores the count of whichever of said two target objects has the later counting time point.
9. The image-based target object counting apparatus according to claim 6, wherein said fusion target acquisition unit comprises:
a candidate target determination unit, which takes the minimum rectangles that enclose connected regions of pixels having a preset value in said foreground image and that satisfy a first predetermined condition as candidate targets, each candidate target characterizing a candidate for a target object that is passing by, wherein said first predetermined condition is that the number of pixels having said preset value in the minimum rectangle serving as said candidate target is not less than a predetermined number; and
a fusion unit, which fuses the obtained candidate targets to obtain said fusion targets.
10. The image-based target object counting apparatus according to any one of claims 6 to 9, wherein said counting unit comprises:
a selecting unit, which selects, from the obtained fusion targets, the fusion targets that satisfy a second predetermined condition as object fusion targets, wherein said second predetermined condition is that, in a fusion target serving as said object fusion target, the ratio of the area occupied by the pixels having said preset value to the total area of the fusion target is greater than a predetermined ratio threshold;
a matching unit, which, for each object fusion target in the sub-image block of the current image, finds, among the fusion targets obtained from the same sub-image block of at least the previous image, the fusion target that has the largest overlapping area with this object fusion target, wherein this largest overlapping area exceeds a predetermined area threshold, as the most-overlapping fusion target of this object fusion target; and
a comparing unit, which compares each of the object fusion targets of the sub-image block of the current image and the corresponding most-overlapping fusion target in the same sub-image block of at least the previous image with the boundary line to determine whether to increment the count,
wherein, if an object fusion target in the sub-image block of the current image crosses the boundary line and the corresponding most-overlapping fusion target in the same sub-image block of at least the previous image does not cross the same boundary line, the count is incremented.
CN201410225632.8A 2014-05-26 2014-05-26 Image-based target object counting method and apparatus Pending CN105335782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410225632.8A CN105335782A (en) 2014-05-26 2014-05-26 Image-based target object counting method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410225632.8A CN105335782A (en) 2014-05-26 2014-05-26 Image-based target object counting method and apparatus

Publications (1)

Publication Number Publication Date
CN105335782A 2016-02-17

Family

ID=55286297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410225632.8A Pending CN105335782A (en) 2014-05-26 2014-05-26 Image-based target object counting method and apparatus

Country Status (1)

Country Link
CN (1) CN105335782A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004021375A (en) * 2002-06-13 2004-01-22 Av Planning Center:Kk Object counting method and object counting device
CN101021949A (en) * 2007-03-23 2007-08-22 中山大学 Automatic monitoring method for miner entry and exit of coal mine
DE102007041333A1 (en) * 2007-08-31 2009-03-05 Siemens Ag Österreich Target object i.e. person, contactless counting method for e.g. determination of actual load of production facility, involves identifying breakage or turning of graphical cluster in direction of time axis as object passing area
CN101835034A (en) * 2010-05-27 2010-09-15 王巍 Crowd characteristic counting system
CN102270347A (en) * 2011-08-05 2011-12-07 上海交通大学 Target detection method based on linear regression model
CN103810722A (en) * 2014-02-27 2014-05-21 云南大学 Moving target detection method combining improved LBP (Local Binary Pattern) texture and chrominance information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Qiong et al.: "Pedestrian target detection based on visual attention model computation", Journal of Beijing Information Science & Technology University *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671241A (en) * 2017-10-16 2019-04-23 中国电信股份有限公司 Alarm method and system
CN110009611A (en) * 2019-03-27 2019-07-12 中南民族大学 A kind of sensation target dynamic itemset counting method and system towards image sequence
CN110009611B (en) * 2019-03-27 2021-05-14 中南民族大学 Visual target dynamic counting method and system for image sequence


Legal Events

Code - Description
C06 - Publication
PB01 - Publication
C10 - Entry into substantive examination
SE01 - Entry into force of request for substantive examination
WD01 - Invention patent application deemed withdrawn after publication (application publication date: 20160217)