CN103810696A - Method for detecting image of target object and device thereof - Google Patents
- Legal status: Granted
Abstract
The invention discloses a method for detecting an image of a target object. The method comprises the following steps: when an image region matching the target object image exists in a preset region of the frame preceding the current frame, obtaining the coordinate position of that image region in the preceding frame; detecting whether an image region matching the target object image exists in a first set region of the current frame that contains the obtained coordinate position, the first set region lying within the preset region; and, when a matching image region exists in the first set region, determining that an image region matching the target object image exists in the preset region of the current frame. The method and device solve the problem of low efficiency in detecting the image of a target object.
Description
Technical field
The present invention relates to the field of video surveillance, and in particular to a method and device for detecting an image of a target object.
Background art
Video surveillance is an important component of security systems and has long been widely used in many industries. Having gone through digitization and networking, video surveillance now shows an increasingly intelligent development trend.
In many video surveillance scenes it is necessary to detect, track and analyse the behaviour of target objects, such as people, animals or vehicles, present in a preset region. In a prison, for example, specific attendants must stand guard or patrol; if an attendant leaves the post without authorization, events cannot be handled normally and uncontrollable accidents may occur. It is therefore necessary to detect whether the attendant is on duty.
A common prior-art scheme is to detect, in the panoramic image of the preset region of every obtained video frame, whether a target object is present. Although this scheme can detect whether the target object is in the preset region, the detection is performed over the panoramic image of the preset region in every frame; because the searched area is large, the efficiency is low.
Summary of the invention
Embodiments of the present invention provide a method and device for detecting an image of a target object, in order to solve the prior-art problem of low efficiency when detecting the target object image.
An embodiment of the present invention provides a target object image detection method, comprising:
when an image region matching the target object image exists in a preset region of the frame preceding the current frame, obtaining the coordinate position of the image region in the preceding frame;
detecting whether an image region matching the target object image exists in a first set region of the current frame that contains the obtained coordinate position, the first set region lying within the preset region; and
when an image region matching the target object image exists in the first set region, determining that an image region matching the target object image exists in the preset region of the current frame.
An embodiment of the present invention provides a target object image detection device, comprising:
an acquiring unit, configured to obtain, when an image region matching the target object image exists in a preset region of the frame preceding the current frame, the coordinate position of the image region in the preceding frame;
a detecting unit, configured to detect whether an image region matching the target object image exists in a first set region of the current frame that contains the obtained coordinate position, the first set region lying within the preset region; and
a determining unit, configured to determine, when an image region matching the target object image exists in the first set region, that an image region matching the target object image exists in the preset region of the current frame.
Beneficial effects of the present invention include:
With the method provided by the embodiments, if an image region matching the target object image exists in the frame preceding the current frame, its position coordinates are obtained, and detection in the current frame is performed within a first set region containing those coordinates. Because the position of the target object image usually stays unchanged, or changes little, between two adjacent frames, the probability that the object appears in the first set region is high; and because the first set region is smaller than the preset region, detection efficiency is improved.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and form a part of the specification; together with the embodiments they serve to explain the present invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of the target object image detection method provided by an embodiment of the present invention;
Fig. 2 is a detailed flowchart of the target object image detection method provided by embodiment 1 of the present invention;
Fig. 3 is a detailed flowchart of the target object image detection method provided by embodiment 2 of the present invention;
Fig. 4 is a detailed flowchart of the target object image detection method provided by embodiment 3 of the present invention;
Fig. 5 is a schematic structural diagram of the target object image detection device provided by embodiment 4 of the present invention.
Detailed description of the embodiments
To provide an implementation that improves target object image detection efficiency, embodiments of the present invention provide a target object image detection method and device. Preferred embodiments are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here only serve to describe and explain the present invention and are not intended to limit it. Where no conflict arises, the embodiments in this application and the features in them may be combined with one another.
An embodiment of the present invention provides a target object image detection method which, as shown in Fig. 1, comprises:
Step 101: when an image region matching the target object image exists in the preset region of the frame preceding the current frame, obtain the coordinate position of that image region in the preceding frame.
Step 102: detect whether an image region matching the target object image exists in a first set region of the current frame that contains the obtained coordinate position; the first set region lies within the preset region.
Step 103: when an image region matching the target object image exists in the first set region, determine that an image region matching the target object image exists in the preset region of the current frame.
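As an illustrative sketch only (not part of the claimed method), steps 101 to 103 can be rendered in Python. The window scan, the normalized-correlation score, the margin defining the first set region and the threshold value are all assumptions:

```python
import numpy as np

def find_match(frame, template, region, threshold=0.9):
    """Scan region = (x0, y0, x1, y1) of frame for the window most similar
    to template; return its (x, y) position, or None below threshold."""
    th, tw = template.shape
    t = template.astype(float)
    x0, y0, x1, y1 = region
    best, best_pos = -1.0, None
    for y in range(y0, y1 - th + 1):
        for x in range(x0, x1 - tw + 1):
            w = frame[y:y + th, x:x + tw].astype(float)
            denom = np.linalg.norm(w) * np.linalg.norm(t)
            score = float((w * t).sum() / denom) if denom else 0.0
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos if best >= threshold else None

def detect(frame, template, preset, last_pos, margin=8):
    """Steps 101-103: when the target matched at last_pos in the preceding
    frame, search only a first set region around that position, clipped to
    the preset region; otherwise fall back to the whole preset region."""
    th, tw = template.shape
    if last_pos is not None:
        x, y = last_pos
        first = (max(preset[0], x - margin), max(preset[1], y - margin),
                 min(preset[2], x + tw + margin), min(preset[3], y + th + margin))
        return find_match(frame, template, first)
    return find_match(frame, template, preset)
```

If `detect` returns a position, the method concludes, as in step 103, that a matching image region exists in the preset region of the current frame.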
Further, when no image region matching the target object image exists in the first set region, it can be determined that no matching image region exists in the preset region of the current frame.
However, directly concluding that no matching image region exists in the preset region of the current frame whenever none is found in the first set region may affect the accuracy of the detection.
Preferably, when no image region matching the target object image exists in the first set region, it is further judged whether a foreground image exists in the preset region of the current frame. If no foreground image exists, it is determined that no matching image region exists in the preset region of the current frame. If a foreground image does exist, detection is performed in a second set region of the current frame that contains the foreground image; the second set region also lies within the preset region.
When an image region matching the target object image exists in the second set region, it is determined that a matching image region exists in the preset region of the current frame; when none exists in the second set region, it is determined that no matching image region exists in the preset region of the current frame.
The precondition for the above steps is that an image region matching the target object image exists in the preset region of the frame preceding the current frame. When no such region exists in the preceding frame, the detection is performed directly over the whole preset region of the current frame.
Whether an image region matching the target object image exists in a target region (the first set region, the second set region or the preset region) can be detected in the following ways:
Mode one: detect whether the target region contains an image region whose gray texture similarity to the target object image exceeds a texture similarity threshold. If such a region exists, it is determined that an image region matching the target object image exists in the target region; otherwise, it is determined that no matching image region exists in the target region.
Mode two: detect whether the target region contains an image region whose segmentation-image similarity to the target object image exceeds an image similarity threshold. If such a region exists, it is determined that a matching image region exists in the target region; otherwise, it is determined that no matching image region exists in the target region.
Mode three: detect whether the target region contains an image region whose gray texture similarity to the target object image exceeds the texture similarity threshold and whose segmentation-image similarity exceeds the image similarity threshold. If such a region exists, it is determined that a matching image region exists in the target region; otherwise, it is determined that no matching image region exists in the target region.
Detecting matching image regions based on gray texture similarity and/or segmentation-image similarity is only an example; other detection methods can also serve as specific implementations of this step.
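The two similarity measures can be combined as in modes one to three. The sketch below is illustrative only: the histogram-intersection similarity and the gradient-histogram texture proxy are assumptions (embodiment 1 computes the texture similarity from LBP values instead), as are the threshold values:

```python
import numpy as np

def hist_sim(ha, hb):
    """Histogram-intersection similarity in [0, 1] (1 for identical histograms)."""
    ha = ha / max(ha.sum(), 1e-9)
    hb = hb / max(hb.sum(), 1e-9)
    return float(np.minimum(ha, hb).sum())

def gray_hist(img, bins=32):
    """Gray-level histogram, a proxy for the 'segmentation image' comparison."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h.astype(float)

def tex_hist(img, bins=16):
    """Texture proxy: histogram of horizontal gradient magnitudes (an
    assumption; the description leaves the texture measure open)."""
    g = np.abs(np.diff(img.astype(float), axis=1))
    h, _ = np.histogram(g, bins=bins, range=(0, 256))
    return h.astype(float)

def region_matches(region, target, t_tex=0.8, t_seg=0.8, mode=3):
    """Modes one to three: texture similarity only, segmentation-image
    similarity only, or both thresholds must be exceeded."""
    tex = hist_sim(tex_hist(region), tex_hist(target)) > t_tex
    seg = hist_sim(gray_hist(region), gray_hist(target)) > t_seg
    return tex if mode == 1 else seg if mode == 2 else (tex and seg)
```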
Further, an image region satisfying several matching conditions can be defined as a definite image region matching the target object image, while an image region satisfying only one of those matching conditions is defined as a latent (potential) image region matching the target object image.
When a definite matching image region exists in the preset region of the preceding frame, its coordinate position in the preceding frame is obtained; when no definite matching region exists there but a latent matching region does, the coordinate position of the latent image region in the preceding frame is obtained.
Within a first set region of the current frame containing the obtained coordinate position, it is then detected whether a definite matching image region exists and whether a latent matching image region exists. If a definite matching region exists in the first set region, it is determined that a definite matching region exists in the preset region of the current frame; if a latent matching region exists in the first set region, it is determined that a latent matching region exists in the preset region of the current frame.
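The definite/latent distinction reduces to counting how many matching conditions a candidate region meets. A minimal sketch, with the list of already-evaluated conditions supplied by the caller:

```python
def classify_match(conditions):
    """conditions: list of booleans, one per matching condition (for
    example: texture threshold exceeded, segmentation threshold exceeded).
    Meeting all conditions gives a definite match; meeting at least one
    but not all gives a latent (potential) match; otherwise no match."""
    met = sum(bool(c) for c in conditions)
    if conditions and met == len(conditions):
        return "definite"
    if met >= 1:
        return "latent"
    return None
```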
The method and device provided by the present invention are described in detail below with specific embodiments and with reference to the accompanying drawings.
Embodiment 1:
Fig. 2 is the detailed flowchart of the target object image detection provided by embodiment 1 of the present invention, which comprises the following processing steps:
When it is determined that no image region matching the target object image exists in the preset region of the frame preceding the current frame, proceed to step 202; when such a region exists, proceed to step 203.
In this embodiment, when detecting whether an image region matching the target object image exists in a target region (the preset region in step 202, or the first or second set region in subsequent steps), the above mode three is used:
Detect whether the target region contains an image region whose gray texture similarity to the target object image exceeds the texture similarity threshold and whose segmentation-image similarity exceeds the image similarity threshold. If such a region exists, it is determined that a matching image region exists in the target region; otherwise, it is determined that no matching image region exists in the target region.
Specifically, the gray texture similarity can be calculated based on LBP (Local Binary Pattern) values, and the segmentation-image similarity based on segmentation by gray-level histogram.
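The LBP-based gray texture similarity mentioned here can be sketched as follows. The 8-neighbour pattern and the histogram-intersection comparison are one common choice, not necessarily the exact computation intended by the patent:

```python
import numpy as np

def lbp_image(img):
    """8-neighbour local binary pattern: each interior pixel becomes an
    8-bit code marking which of its neighbours are >= the centre pixel."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def texture_similarity(a, b, bins=256):
    """Histogram intersection of the two LBP code histograms, in [0, 1]."""
    ha, _ = np.histogram(lbp_image(a), bins=bins, range=(0, 256))
    hb, _ = np.histogram(lbp_image(b), bins=bins, range=(0, 256))
    ha = ha / max(ha.sum(), 1)
    hb = hb / max(hb.sum(), 1)
    return float(np.minimum(ha, hb).sum())
```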
Proceed directly to step 207 or step 208 according to the detection result.
When it is determined that no image region matching the target object image exists in the first set region of the current frame containing the obtained coordinate position, proceed to step 205; when such a region exists, proceed to step 207.
When a foreground image exists in the preset region of the current frame, proceed to step 206; when none exists, proceed to step 208.
In this embodiment, any background modelling method can be used to generate the background image, which is also continually learned and updated. According to the background image and the preset region of the current frame, it is judged whether a foreground image exists in the preset region of the current frame.
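As one possible background modelling method among the many the text allows, a running-average model with a small learning rate might look like the sketch below; the values of `alpha` and `threshold` are assumptions. A small `alpha` learns slowly, which matters for the on-duty application described later, where a briefly motionless person must not be absorbed into the background:

```python
import numpy as np

class BackgroundModel:
    """Running-average background model with continual learning updates."""

    def __init__(self, first_frame, alpha=0.01, threshold=25):
        self.bg = first_frame.astype(float)
        self.alpha = alpha          # learning rate: small = slow learning
        self.threshold = threshold  # per-pixel foreground difference threshold

    def foreground_mask(self, frame):
        """Pixels differing from the background beyond the threshold,
        followed by a continual-learning update of the background."""
        mask = np.abs(frame.astype(float) - self.bg) > self.threshold
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return mask

    def has_foreground(self, frame, region):
        """Judge whether a foreground image exists in region = (x0, y0, x1, y1)."""
        x0, y0, x1, y1 = region
        return bool(self.foreground_mask(frame)[y0:y1, x0:x1].any())
```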
When it is determined that an image region matching the target object image exists in the second set region of the current frame containing the foreground image, proceed to step 207; when none exists, proceed to step 208.
Preferably, the detection in the second set region can be restricted to the part outside the first set region.
End the detection process for the current frame and record the coordinate position of the image region in the current frame, for use when detecting the target object image in the preset region of the next frame.
When the method of embodiment 1 is applied to detecting whether an attendant is on duty, the attendant's image is the target object image of embodiment 1; it can be stored in advance, or set in real time after the attendant comes on duty. The region of the image corresponding to the attendant's range of activity is the preset region of embodiment 1: a sentry's range of activity is small, so the corresponding preset region is small, while a patrolling attendant's range of activity, and hence the corresponding preset region, is larger.
It should be noted that, because the background image is continually learned and updated, an object that remains motionless for a long time will be learned into the background. When the method of embodiment 1 is applied to on-duty detection, an attendant, whether standing guard or patrolling, may be motionless or move very slowly, so the learning speed of the background image should be reduced to improve the accuracy of detecting the attendant's image.
Moreover, in practice monitoring scenes are complex, and whether the attendant has left the post cannot be judged from the detection result of a single frame. The detection result of each frame can be saved, and only when the attendant's image is absent from the preset region for a set number of consecutive frames is it determined that the attendant has left the post, whereupon a leaving-post report is sent and an alarm is started.
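The consecutive-frame rule for the leaving-post alarm can be sketched as a small counter; the number of frames is an assumed parameter:

```python
class AbsenceAlarm:
    """Raise the leaving-post alarm only after the attendant's image is
    absent from the preset region for n_frames consecutive frames."""

    def __init__(self, n_frames=50):
        self.n_frames = n_frames
        self.missed = 0  # consecutive frames with no match so far

    def update(self, present):
        """Record one frame's detection result; return True to alarm."""
        self.missed = 0 if present else self.missed + 1
        return self.missed >= self.n_frames
```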
With the method provided by embodiment 1, the detection of the target object image in the current frame is carried out in a small region containing the position coordinates of the image region that matched in the preceding frame, which improves detection efficiency; and when no matching image region exists in that region of the current frame, detection continues in the foreground image of the preset region, which guarantees detection accuracy.
Preferably, an image region in the target region whose gray texture similarity to the target object image exceeds the texture similarity threshold but whose segmentation-image similarity does not exceed the image similarity threshold, or vice versa, can be defined as a latent image region; an image region exceeding both thresholds is defined as a definite image region. The corresponding processing flow of the target object image detection is given in embodiment 2.
Embodiment 2:
Fig. 3 is the detailed flowchart of the target object image detection provided by embodiment 2 of the present invention, which comprises the following processing steps:
When no definite image region matching the target object image exists in the preset region of the frame preceding the current frame, proceed to step 302; when one exists, proceed to step 304.
When no latent matching image region exists in the preset region of the preceding frame, proceed to step 303; when one exists, proceed to step 307.
Proceed directly to step 309, step 310 or step 311 according to the detection result.
When no definite matching image region exists in the first set region, proceed to step 306; when a definite matching region exists, proceed to step 309; when a latent matching region exists, proceed to step 311.
When a latent matching image region exists in the preset region of the preceding frame, proceed to step 307; when none exists, proceed to step 310.
When a definite matching image region exists in the first set region, proceed to step 309; when none exists, proceed to step 310; when a latent matching region exists in the first set region, proceed to step 311.
Step 309: determine that a definite image region matching the target object image exists in the preset region of the current frame.
Preferably, the alarm is started when no definite matching image region exists in the preset regions of a set number of consecutive frames.
With the method provided by embodiment 2, the detection of the target object image in the current frame is carried out in a small region, which improves detection efficiency; and because detection is also performed in the first set region containing the coordinate position of the latent image region in the preceding frame, the accuracy of detection is improved relative to a scheme that detects only in the first set region containing the coordinate position of the definite image region.
On the basis of the improved efficiency, the accuracy of target object image detection can be further improved with the method of embodiment 3 below.
Embodiment 3:
Fig. 4 is the detailed flowchart of the target object image detection provided by embodiment 3 of the present invention, which comprises the following processing steps:
When no definite image region matching the target object image exists in the preset region of the frame preceding the current frame, proceed to step 402; when one exists, proceed to step 404.
When no latent matching image region exists in the preset region of the preceding frame, proceed to step 403; when one exists, proceed to step 407.
Proceed directly to step 411, step 412 or step 413 according to the detection result.
When no definite matching image region exists in the first set region, proceed to step 406; when a definite matching region exists, proceed to step 411; when a latent matching region exists, proceed to step 413.
When a latent matching image region exists in the preset region of the preceding frame, proceed to step 407; when none exists, proceed to step 412.
When no definite matching image region exists in the first set region, proceed to step 409; when a definite matching region exists, proceed to step 411; when a latent matching region exists in the first set region, proceed to step 413.
When a foreground image exists in the preset region of the current frame, proceed to step 410; when none exists, proceed to step 412.
When a definite matching image region exists in the second set region, proceed to step 411; when no matching image region exists in the second set region, proceed to step 412; when a latent matching region exists in the second set region, proceed to step 413.
Preferably, the detection in the second set region can be restricted to the part outside the first set region.
Preferably, the alarm is started when no definite matching image region exists in the preset regions of a set number of consecutive frames.
With the method provided by embodiment 3, detection is further performed in the foreground image of the preset region relative to the method of embodiment 2, which further improves detection accuracy on the basis of the improved efficiency.
Embodiment 4:
Based on same inventive concept, the destination object image detecting method method providing according to the above embodiment of the present invention, correspondingly, another embodiment of the present invention also provides destination object image detection device, and apparatus structure schematic diagram as shown in Figure 5, specifically comprises:
Acquiring unit 501, in the time existing with the image-region of destination object images match in the predeterminable area of the previous frame image of current frame image, obtains the coordinate position of this image-region in previous frame image;
Detecting unit 502, detects for comprising at current frame image the image-region whether existing with destination object images match in the first setting regions of this coordinate position obtaining; This first setting regions is positioned at predeterminable area;
Determining unit 503, in the time existing with the image-region of destination object images match in the first setting regions, determines the image-region existing in the predeterminable area of current frame image with destination object images match.
Further, the determining unit 503 is also configured to, when no image region matching the target object image exists in the first set region and no foreground image exists in the preset region of the current frame, determine that no image region matching the target object image exists in the preset region of the current frame.
Further, the detecting unit 502 is also configured to, when no image region matching the target object image exists in the first set region but a foreground image exists in the preset region of the current frame, detect whether an image region matching the target object image exists in a second set region of the current frame that contains the foreground image, the second set region being located within the preset region; and, when an image region matching the target object image exists in the second set region, determine that an image region matching the target object image exists in the preset region of the current frame.
Further, the determining unit 503 is also configured to, when no image region matching the target object image exists in the second set region, determine that no image region matching the target object image exists in the preset region of the current frame.
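The fallback logic described above (first set region, then a foreground-based second set region, then no match) could be sketched as follows; `find_match` and `find_foreground` are assumed helpers, not part of the disclosure:

```python
def detect_in_preset_region(frame, target, first_region, preset_region,
                            find_match, find_foreground):
    """Return the matched image region in the preset region, or None.

    find_match(frame, target, region) -> matched region or None (assumed helper)
    find_foreground(frame, preset_region) -> foreground region or None (assumed helper)
    """
    # Step 1: search the first set region around the previous position.
    match = find_match(frame, target, first_region)
    if match is not None:
        return match

    # Step 2: no match there -- check for a foreground image in the preset region.
    foreground = find_foreground(frame, preset_region)
    if foreground is None:
        # No foreground anywhere in the preset region: no match in this frame.
        return None

    # Step 3: search a second set region that contains the foreground image.
    second_region = foreground  # sketch: use the foreground bounds directly
    return find_match(frame, target, second_region)
```

The foreground check is what lets the method skip a second template search entirely when the scene inside the preset region is static.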
Further, the detecting unit 502 is specifically configured to detect whether an image region whose gray-scale texture similarity with the target object image is greater than a texture similarity threshold exists in the first set region; when such an image region exists in the first set region, determine that an image region matching the target object image exists in the first set region; when no such image region exists, determine that no image region matching the target object image exists in the first set region; or
Detect whether an image region whose segmented-image similarity with the target object image is greater than an image similarity threshold exists in the first set region; when such an image region exists in the first set region, determine that an image region matching the target object image exists in the first set region; when no such image region exists, determine that no image region matching the target object image exists in the first set region; or
Detect whether an image region whose gray-scale texture similarity with the target object image is greater than the texture similarity threshold and whose segmented-image similarity with the target object image is greater than the image similarity threshold exists in the first set region; when such an image region exists in the first set region, determine that an image region matching the target object image exists in the first set region; when no such image region exists, determine that no image region matching the target object image exists in the first set region.
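The three matching strategies (texture similarity only, segmented-image similarity only, or both thresholds at once) could be sketched as below. The disclosure does not fix particular formulas, so the concrete scores shown here, normalized cross-correlation for texture and agreement of binarized segmentations, are illustrative assumptions:

```python
import numpy as np

def texture_similarity(region, target):
    # Illustrative gray-scale texture score: normalized cross-correlation
    # of the mean-subtracted pixel values (range roughly -1..1).
    a = region.astype(float).ravel() - region.mean()
    b = target.astype(float).ravel() - target.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def segmentation_similarity(region, target, thresh=128):
    # Illustrative segmented-image score: fraction of pixels whose
    # binarized (foreground/background) labels agree.
    return float(np.mean((region >= thresh) == (target >= thresh)))

def is_match(region, target, t_thresh=0.8, s_thresh=0.8, mode="both"):
    tex_ok = texture_similarity(region, target) > t_thresh
    seg_ok = segmentation_similarity(region, target) > s_thresh
    if mode == "texture":        # first strategy: texture threshold only
        return tex_ok
    if mode == "segmentation":   # second strategy: segmentation threshold only
        return seg_ok
    return tex_ok and seg_ok     # third strategy: both thresholds must be exceeded
```

The third mode is the strictest, which matches the embodiment's use of multiple matching conditions to distinguish confirmed matches from weaker ones.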
Further, the acquiring unit 501 is specifically configured to, when a confirmed image region matching the target object image exists in the preset region of the frame preceding the current frame, obtain the coordinate position of that confirmed image region in the previous frame; and, when no confirmed image region matching the target object image exists in the preset region of the previous frame but a candidate image region matching the target object image does, obtain the coordinate position of that candidate image region in the previous frame. Here, a confirmed image region matching the target object image is an image region that satisfies multiple matching conditions, whereas a candidate image region matching the target object image is an image region that satisfies only one of those matching conditions;
The detecting unit 502 is specifically configured to detect, in the first set region of the current frame that contains the obtained coordinate position, whether a confirmed image region matching the target object image exists and whether a candidate image region matching the target object image exists;
The determining unit 503 is specifically configured to, when a confirmed image region matching the target object image exists in the first set region, determine that a confirmed image region matching the target object image exists in the preset region of the current frame; and, when a candidate image region matching the target object image exists in the first set region, determine that a candidate image region matching the target object image exists in the preset region of the current frame.
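The confirmed/candidate distinction (a region satisfying all matching conditions versus only one of them) and the acquiring unit's preference for the confirmed region's coordinates could be classified as follows; the function names and the exactly-one rule for candidates are taken from the embodiment, everything else is an illustrative assumption:

```python
def classify_region(region, target, conditions):
    """conditions: list of predicates (region, target) -> bool.

    Returns 'confirmed' if every matching condition holds, 'candidate' if
    exactly one of them holds, and None otherwise (sketch of the rule).
    """
    passed = sum(1 for cond in conditions if cond(region, target))
    if passed == len(conditions) and passed > 0:
        return "confirmed"
    if passed == 1:
        return "candidate"
    return None

def position_to_track(prev_confirmed, prev_candidate):
    # Acquiring unit 501: prefer the confirmed region's coordinates; fall
    # back to the candidate region's coordinates if no confirmed one exists.
    if prev_confirmed is not None:
        return prev_confirmed
    return prev_candidate
```

Tracking a candidate region when no confirmed one exists keeps the search window anchored even through frames where the match is weak.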
In summary, in the scheme provided by the embodiments of the present invention, when an image region matching the target object image exists in the preset region of the frame preceding the current frame, the coordinate position of that image region in the previous frame is obtained; whether an image region matching the target object image exists is then detected only within a first set region of the current frame that contains the obtained coordinate position, the first set region being located within the preset region; and, when a matching image region exists in the first set region, it is determined that an image region matching the target object image exists in the preset region of the current frame. Because detection is restricted to a small region around the previous position instead of the entire preset region, the scheme provided by the present invention solves the problem of low efficiency when detecting the target object image.
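Putting the pieces together, the per-frame flow the embodiments describe could be sketched as one function; `find_match`, `find_foreground`, and `expand` are assumed helpers standing in for the template matcher, the foreground detector, and the first-set-region construction:

```python
def process_frame(frame, target, preset_region, prev_position,
                  find_match, find_foreground, expand):
    """One frame of the tracking-style detection loop (illustrative sketch).

    prev_position: coordinates of last frame's match, or None.
    expand(position) -> first set region around that position (assumed helper).
    Returns (matched_region_or_None, new_position_or_None).
    """
    if prev_position is not None:
        # Fast path: only the first set region around the previous match
        # is searched, not the whole preset region.
        match = find_match(frame, target, expand(prev_position))
        if match is not None:
            return match, match
        # Fallback: search a second set region containing any foreground image.
        fg = find_foreground(frame, preset_region)
        if fg is not None:
            match = find_match(frame, target, fg)
            return match, match
        return None, None
    # No previous match to anchor on: search the whole preset region once.
    match = find_match(frame, target, preset_region)
    return match, match
```

In the common case where the target moves little between frames, only the small expanded window is ever scanned, which is the source of the claimed efficiency gain.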
The target object image detection apparatus provided in the embodiments of the present application can be implemented by a computer program. Those skilled in the art should understand that the above module division is only one of many possible module divisions; if the apparatus is divided into other modules, or not divided into modules at all, it should still fall within the protection scope of the present application as long as it has the above functions.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these changes and modifications.
Claims (12)
1. A target object image detection method, characterized by comprising:
when an image region matching a target object image exists in a preset region of the frame preceding a current frame, obtaining a coordinate position of said image region in said previous frame;
detecting whether an image region matching said target object image exists in a first set region of said current frame that contains the obtained coordinate position, said first set region being located within said preset region;
when an image region matching said target object image exists in said first set region, determining that an image region matching said target object image exists in the preset region of said current frame.
2. The method according to claim 1, characterized by further comprising:
when no image region matching said target object image exists in said first set region and no foreground image exists in the preset region of said current frame, determining that no image region matching said target object image exists in the preset region of said current frame.
3. The method according to claim 2, characterized by further comprising:
when no image region matching said target object image exists in said first set region but a foreground image exists in the preset region of said current frame, detecting whether an image region matching said target object image exists in a second set region of said current frame that contains said foreground image, said second set region being located within said preset region;
when an image region matching said target object image exists in said second set region, determining that an image region matching said target object image exists in the preset region of said current frame.
4. The method according to claim 3, characterized by further comprising:
when no image region matching said target object image exists in said second set region, determining that no image region matching said target object image exists in the preset region of said current frame.
5. The method according to claim 1, characterized in that whether an image region matching said target object image exists in said first set region is specifically detected in one of the following ways:
detecting whether an image region whose gray-scale texture similarity with said target object image is greater than a texture similarity threshold exists in said first set region; when such an image region exists in said first set region, determining that an image region matching said target object image exists in said first set region; when no such image region exists in said first set region, determining that no image region matching said target object image exists in said first set region; or
detecting whether an image region whose segmented-image similarity with said target object image is greater than an image similarity threshold exists in said first set region; when such an image region exists in said first set region, determining that an image region matching said target object image exists in said first set region; when no such image region exists in said first set region, determining that no image region matching said target object image exists in said first set region; or
detecting whether an image region whose gray-scale texture similarity with said target object image is greater than the texture similarity threshold and whose segmented-image similarity with said target object image is greater than the image similarity threshold exists in said first set region; when such an image region exists in said first set region, determining that an image region matching said target object image exists in said first set region; when no such image region exists in said first set region, determining that no image region matching said target object image exists in said first set region.
6. The method according to claim 1, characterized in that, when an image region matching the target object image exists in the preset region of the frame preceding the current frame, obtaining the coordinate position of said image region in said previous frame specifically comprises:
when a confirmed image region matching the target object image exists in the preset region of the previous frame, obtaining the coordinate position of said confirmed image region in said previous frame; when no confirmed image region matching the target object image exists in the preset region of the previous frame, but a candidate image region matching the target object image does exist in the preset region of the previous frame, obtaining the coordinate position of said candidate image region in said previous frame;
wherein said confirmed image region matching the target object image is an image region that satisfies multiple matching conditions, and said candidate image region matching the target object image is an image region that satisfies only one of said multiple matching conditions;
detecting whether an image region matching said target object image exists in the first set region of said current frame that contains the obtained coordinate position specifically comprises:
detecting, in the first set region of said current frame that contains the obtained coordinate position, whether a confirmed image region matching said target object image exists, and whether a candidate image region matching said target object image exists;
when an image region matching said target object image exists in said first set region, determining that an image region matching said target object image exists in the preset region of said current frame specifically comprises:
when a confirmed image region matching said target object image exists in said first set region, determining that a confirmed image region matching said target object image exists in the preset region of said current frame;
when a candidate image region matching said target object image exists in said first set region, determining that a candidate image region matching said target object image exists in the preset region of said current frame.
7. A target object image detection apparatus, characterized by comprising:
an acquiring unit, configured to, when an image region matching a target object image exists in a preset region of the frame preceding a current frame, obtain a coordinate position of said image region in said previous frame;
a detecting unit, configured to detect whether an image region matching said target object image exists in a first set region of said current frame that contains the obtained coordinate position, said first set region being located within said preset region;
a determining unit, configured to, when an image region matching said target object image exists in said first set region, determine that an image region matching said target object image exists in the preset region of said current frame.
8. The apparatus according to claim 7, characterized in that said determining unit is further configured to, when no image region matching said target object image exists in said first set region and no foreground image exists in the preset region of said current frame, determine that no image region matching said target object image exists in the preset region of said current frame.
9. The apparatus according to claim 8, characterized in that said detecting unit is further configured to, when no image region matching said target object image exists in said first set region but a foreground image exists in the preset region of said current frame, detect whether an image region matching said target object image exists in a second set region of said current frame that contains said foreground image, said second set region being located within said preset region; and, when an image region matching said target object image exists in said second set region, determine that an image region matching said target object image exists in the preset region of said current frame.
10. The apparatus according to claim 9, characterized in that said determining unit is further configured to, when no image region matching said target object image exists in said second set region, determine that no image region matching said target object image exists in the preset region of said current frame.
11. The apparatus according to claim 7, characterized in that said detecting unit is specifically configured to detect whether an image region whose gray-scale texture similarity with said target object image is greater than a texture similarity threshold exists in said first set region; when such an image region exists in said first set region, determine that an image region matching said target object image exists in said first set region; when no such image region exists in said first set region, determine that no image region matching said target object image exists in said first set region; or
detect whether an image region whose segmented-image similarity with said target object image is greater than an image similarity threshold exists in said first set region; when such an image region exists in said first set region, determine that an image region matching said target object image exists in said first set region; when no such image region exists in said first set region, determine that no image region matching said target object image exists in said first set region; or
detect whether an image region whose gray-scale texture similarity with said target object image is greater than the texture similarity threshold and whose segmented-image similarity with said target object image is greater than the image similarity threshold exists in said first set region; when such an image region exists in said first set region, determine that an image region matching said target object image exists in said first set region; when no such image region exists in said first set region, determine that no image region matching said target object image exists in said first set region.
12. The apparatus according to claim 7, characterized in that said acquiring unit is specifically configured to, when a confirmed image region matching the target object image exists in the preset region of the frame preceding the current frame, obtain the coordinate position of said confirmed image region in said previous frame; and, when no confirmed image region matching the target object image exists in the preset region of the previous frame but a candidate image region matching the target object image does exist in the preset region of the previous frame, obtain the coordinate position of said candidate image region in said previous frame; wherein said confirmed image region matching the target object image is an image region that satisfies multiple matching conditions, and said candidate image region matching the target object image is an image region that satisfies only one of said multiple matching conditions;
said detecting unit is specifically configured to detect, in the first set region of said current frame that contains the obtained coordinate position, whether a confirmed image region matching said target object image exists, and whether a candidate image region matching said target object image exists;
said determining unit is specifically configured to, when a confirmed image region matching said target object image exists in said first set region, determine that a confirmed image region matching said target object image exists in the preset region of said current frame; and, when a candidate image region matching said target object image exists in said first set region, determine that a candidate image region matching said target object image exists in the preset region of said current frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210463919.5A CN103810696B (en) | 2012-11-15 | 2012-11-15 | Method for detecting image of target object and device thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103810696A true CN103810696A (en) | 2014-05-21 |
CN103810696B CN103810696B (en) | 2017-03-22 |
Family
ID=50707417
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210463919.5A Active CN103810696B (en) | 2012-11-15 | 2012-11-15 | Method for detecting image of target object and device thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103810696B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1912950A (en) * | 2006-08-25 | 2007-02-14 | 浙江工业大学 | Device for monitoring vehicle breaking regulation based on all-position visual sensor |
CN101305616A (en) * | 2005-09-09 | 2008-11-12 | 索尼株式会社 | Image processing device and method, program, and recording medium |
CN101673403A (en) * | 2009-10-10 | 2010-03-17 | 安防制造(中国)有限公司 | Target following method in complex interference scene |
CN102087747A (en) * | 2011-01-05 | 2011-06-08 | 西南交通大学 | Object tracking method based on simplex method |
CN102567705A (en) * | 2010-12-23 | 2012-07-11 | 北京邮电大学 | Method for detecting and tracking night running vehicle |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105989348A (en) * | 2014-12-09 | 2016-10-05 | 由田新技股份有限公司 | Detection method and system for using handheld device by person |
CN107920223A (en) * | 2016-10-08 | 2018-04-17 | 杭州海康威视数字技术股份有限公司 | A kind of object behavior detection method and device |
CN108109132A (en) * | 2016-11-25 | 2018-06-01 | 杭州海康威视数字技术股份有限公司 | A kind of image analysis method and device |
US11048950B2 (en) | 2016-11-25 | 2021-06-29 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method and device for processing images of vehicles |
CN106686308A (en) * | 2016-12-28 | 2017-05-17 | 平安科技(深圳)有限公司 | Image focal length detection method and device |
CN106686308B (en) * | 2016-12-28 | 2018-02-16 | 平安科技(深圳)有限公司 | Image focal length detection method and device |
CN108388879A (en) * | 2018-03-15 | 2018-08-10 | 斑马网络技术有限公司 | Mesh object detection method, device and storage medium |
CN108388879B (en) * | 2018-03-15 | 2022-04-15 | 斑马网络技术有限公司 | Target detection method, device and storage medium |
CN111723603A (en) * | 2019-03-19 | 2020-09-29 | 杭州海康威视数字技术股份有限公司 | Material monitoring method, system and device |
CN113066121A (en) * | 2019-12-31 | 2021-07-02 | 深圳迈瑞生物医疗电子股份有限公司 | Image analysis system and method for identifying repeat cells |
CN111898581A (en) * | 2020-08-12 | 2020-11-06 | 成都佳华物链云科技有限公司 | Animal detection method, device, electronic equipment and readable storage medium |
CN111898581B (en) * | 2020-08-12 | 2024-05-17 | 成都佳华物链云科技有限公司 | Animal detection method, apparatus, electronic device, and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110210302B (en) | Multi-target tracking method, device, computer equipment and storage medium | |
CN103810696A (en) | Method for detecting image of target object and device thereof | |
US10417503B2 (en) | Image processing apparatus and image processing method | |
CN107358149B (en) | Human body posture detection method and device | |
US20200082549A1 (en) | Efficient object detection and tracking | |
CN108182396B (en) | Method and device for automatically identifying photographing behavior | |
KR101394242B1 (en) | A method for monitoring a video and an apparatus using it | |
CN110853076A (en) | Target tracking method, device, equipment and storage medium | |
CN111368615B (en) | Illegal building early warning method and device and electronic equipment | |
CN104049760B (en) | The acquisition methods and system of a kind of man-machine interaction order | |
CN111798487B (en) | Target tracking method, apparatus and computer readable storage medium | |
CN109255802B (en) | Pedestrian tracking method, device, computer equipment and storage medium | |
US10945888B2 (en) | Intelligent blind guide method and apparatus | |
CN111382637B (en) | Pedestrian detection tracking method, device, terminal equipment and medium | |
US9367747B2 (en) | Image processing apparatus, image processing method, and program | |
CN110557628A (en) | Method and device for detecting shielding of camera and electronic equipment | |
CN112184773A (en) | Helmet wearing detection method and system based on deep learning | |
CN108288025A (en) | A kind of car video monitoring method, device and equipment | |
CN111582077A (en) | Safety belt wearing detection method and device based on artificial intelligence software technology | |
CN111950523A (en) | Ship detection optimization method and device based on aerial photography, electronic equipment and medium | |
CN113688820A (en) | Stroboscopic stripe information identification method and device and electronic equipment | |
CN104063041A (en) | Information processing method and electronic equipment | |
CN106803937B (en) | Double-camera video monitoring method, system and monitoring device with text log | |
CN113286086A (en) | Camera use control method and device, electronic equipment and storage medium | |
CN112101134A (en) | Object detection method and device, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |