CN109255349A - Object detection method, device and image processing equipment - Google Patents

Object detection method, device and image processing equipment

Info

Publication number
CN109255349A
CN109255349A (application CN201810554618.0A)
Authority
CN
China
Prior art keywords
highlight regions
detection zone
detection
zone
detecting device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810554618.0A
Other languages
Chinese (zh)
Other versions
CN109255349B (en)
Inventor
白向晖
杨雅文
谭志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd
Publication of CN109255349A
Application granted
Publication of CN109255349B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

An object detection method, an object detection apparatus, and an image processing device. The object detection method includes: determining a second detection region; detecting second highlight regions in the second detection region; and filtering the second highlight regions to obtain the target in the second detection region. By detecting the highlight regions (referred to as second highlight regions) in a detection region (referred to as the second detection region) and determining the target in the detection region from them, the embodiments of the present invention improve the accuracy of target detection under poor illumination conditions.

Description

Object detection method, device and image processing equipment
Technical field
The present invention relates to the technical field of image processing, and in particular to an object detection method, an object detection apparatus, and an image processing device.
Background art
With the development of information technology, image-based target detection techniques are being applied more and more widely. For example, in the field of traffic monitoring, target detection may be performed on video surveillance images to identify targets such as specific vehicles, thereby enabling functions such as target identification, tracking, and management.
It should be noted that the above description of the technical background is intended only to facilitate a clear and complete explanation of the technical solutions of the present invention and to aid the understanding of those skilled in the art. It should not be assumed that these solutions are well known to those skilled in the art merely because they are set forth in this background section.
Summary of the invention
The inventors have found that when a target such as a vehicle enters a tunnel or travels at night, poor illumination and poor visibility greatly reduce the accuracy of target detection.
To solve this problem, embodiments of the present invention provide an object detection method, an object detection apparatus, and an image processing device.
According to a first aspect of the embodiments of the present invention, an object detection method is provided, the method including:
determining a second detection region;
detecting second highlight regions in the second detection region; and
filtering the second highlight regions to obtain the target in the second detection region.
According to a second aspect of the embodiments of the present invention, an object detection apparatus is provided, the apparatus including:
a determination unit that determines a second detection region;
a first detection unit that detects second highlight regions in the second detection region; and
a filtering unit that filters the second highlight regions to obtain the target in the second detection region.
According to a third aspect of the embodiments of the present invention, an image processing device is provided, the image processing device including the object detection apparatus of the aforementioned second aspect.
According to a fourth aspect of the embodiments of the present invention, a computer-readable program is provided, wherein when the program is executed in an object detection apparatus or an image processing device, the program causes the object detection apparatus or image processing device to execute the object detection method of the first aspect of the embodiments of the present invention.
According to a fifth aspect of the embodiments of the present invention, a storage medium storing a computer-readable program is provided, wherein the computer-readable program causes an object detection apparatus or an image processing device to execute the object detection method of the first aspect of the embodiments of the present invention.
The beneficial effects of the embodiments of the present invention are as follows: by detecting the highlight regions (referred to as second highlight regions) in a detection region (referred to as the second detection region), the target in the detection region is determined, which improves the accuracy of target detection under poor illumination conditions. When the embodiments of the present invention are applied to vehicle detection, they exploit the lighting characteristics of vehicle headlamps in poorly illuminated places such as tunnels, so that the vehicle can be effectively detected and the accuracy of target detection improved.
With reference to the following description and drawings, particular embodiments of the present invention are disclosed in detail, indicating some of the ways in which the principles of the invention may be employed. It should be understood that the embodiments of the present invention are not thereby limited in scope; within the scope of the appended claims, the embodiments of the present invention include many changes, modifications, and equivalents.
Features described and/or illustrated for one embodiment may be used in the same or a similar way in one or more other embodiments, combined with features of other embodiments, or substituted for features of other embodiments.
It should be emphasized that the term "comprises/comprising", when used herein, refers to the presence of features, integers, steps, or components, but does not exclude the presence or addition of one or more other features, integers, steps, or components.
Brief description of the drawings
Elements and features described in one drawing or one embodiment of the present invention may be combined with elements and features shown in one or more other drawings or embodiments. In addition, in the drawings, like reference numerals denote corresponding components in several drawings and may be used to denote corresponding components used in more than one embodiment.
The accompanying drawings, which are included to provide a further understanding of the embodiments of the present invention and constitute a part of the specification, illustrate embodiments of the present invention and, together with the written description, serve to explain the principles of the invention. It is evident that the drawings described below are only some embodiments of the present invention, and that those of ordinary skill in the art may derive other drawings from them without inventive effort. In the drawings:
Fig. 1 is a schematic diagram of the object detection method of Embodiment 1;
Fig. 2 is a schematic diagram of determining the second detection region in the object detection method of Embodiment 1;
Fig. 3 is a schematic diagram of filtering the second highlight regions in the object detection method of Embodiment 1;
Fig. 4 is a schematic diagram of a second highlight region;
Fig. 5 is a schematic diagram of the convex hull of the second highlight region of Fig. 4;
Fig. 6 to Fig. 10 are schematic diagrams of an implementation scenario of the object detection method of Embodiment 1;
Fig. 11 is a schematic diagram of the object detection apparatus of Embodiment 2;
Fig. 12 is a schematic diagram of the determination unit of the object detection apparatus of Embodiment 2;
Fig. 13 is a schematic diagram of the filtering unit of the object detection apparatus of Embodiment 2;
Fig. 14 is a schematic diagram of the image processing device of Embodiment 3.
Detailed description of the embodiments
The foregoing and other features of the present invention will become apparent from the following specification taken in conjunction with the drawings. In the specification and drawings, particular embodiments of the invention are specifically disclosed, indicating some of the embodiments in which the principles of the invention may be employed. It should be understood that the invention is not limited to the described embodiments; rather, it includes all modifications, variations, and equivalents falling within the scope of the appended claims.
In the embodiments of the present invention, the terms "first", "second", and the like are used to distinguish different elements by name, but do not denote any spatial arrangement or temporal order of those elements, and the elements should not be limited by these terms. The term "and/or" includes any one and all combinations of one or more of the associated listed terms. The terms "comprise", "include", "have", and the like refer to the presence of the stated features, elements, or components, but do not preclude the presence or addition of one or more other features, elements, or components.
In the embodiments of the present invention, the singular forms "a", "the", and the like include plural forms and should be broadly understood as "a" or "a kind of" rather than being limited to the meaning of "one"; the term "the" should be understood to include both the singular and the plural, unless the context clearly indicates otherwise. Furthermore, the term "according to" should be understood as "based at least in part on", and the term "based on" as "based at least in part on", unless the context clearly indicates otherwise.
The various embodiments of the present invention are described below with reference to the drawings. These embodiments are exemplary and do not limit the present invention.
Embodiment 1
This embodiment provides an object detection method. Fig. 1 is a schematic diagram of the method; referring to Fig. 1, the method includes:
Step 101: determining a second detection region;
Step 102: detecting second highlight regions in the second detection region; and
Step 103: filtering the second highlight regions to obtain the target in the second detection region.
In this embodiment, the detection region contains the target to be detected. The highlight regions in the detection region are detected, and those highlight regions that are inconsistent with the characteristics of the target are removed; the remaining highlight regions correspond to the target to be detected, from which the target can be obtained. Moreover, because the target is determined through the detection of highlight regions, the accuracy of target detection under poor illumination conditions is improved.
In this embodiment, for convenience of explanation, the detection region of steps 101-103 is referred to as the "second detection region", and the highlight regions of steps 101-103 are referred to as the "second highlight regions".
In this embodiment, the way in which the detection region (second detection region) of step 101 is determined is not limited. In one implementation, a region containing the detection target may be set as the detection region (second detection region); in another implementation, the detection region (second detection region) may be determined from a reference region.
Fig. 2 is a schematic diagram of one implementation of step 101. Referring to Fig. 2, this implementation includes:
Step 201: determining a first detection region;
Step 202: determining a first reference region according to the first detection region;
Step 203: detecting a first highlight region in the first reference region; and
Step 204: updating the first detection region using the first highlight region to obtain the second detection region.
In step 201, the first detection region may be set arbitrarily; for example, the lane region in which vehicles travel may be set as the first detection region, and the second detection region, i.e. the detection region of step 101, is then determined from this first detection region by the method of this implementation.
In step 202, the first reference region determined according to the first detection region may be a region that is adjacent to the first detection region and that can be illuminated by the lighting device of a target located in the first detection region. For example, when the first detection region is the lane region in which vehicles travel, the first reference region may be a region adjacent to the lane that can be illuminated by vehicle headlamps, such as a tunnel wall. This is only an example; the first reference region need not be adjacent to the first detection region, as long as the lighting device of a target in the first detection region can illuminate the first reference region.
In this implementation, the second detection region can be found through the first highlight region detected in the first reference region.
In step 203, the pixels in the first reference region whose brightness values are greater than a first threshold may be taken as the pixels of the first highlight region, thereby obtaining the first highlight region.
In step 204, the region located within the first detection region, obtained by intersecting the upper and lower boundaries of the first highlight region and their extension lines with the boundary of the first detection region, may be taken as the second detection region.
Thus, through the processing of this implementation, the extent of the detection region is reduced; that is, the detection region is updated from the first detection region to the second detection region, which improves the accuracy of target detection and reduces the computational cost of target detection.
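Treating both regions as axis-aligned rectangles, the region update of steps 201-204 can be sketched as follows. The rectangle representation and the function name are illustrative assumptions, since the patent does not fix a data format:

```python
def update_detection_region(first_region, highlight_bbox):
    """Step 204 sketch: restrict the first detection region to the horizontal
    band between the upper and lower boundaries of the first highlight region
    (extended across the image)."""
    fx0, fy0, fx1, fy1 = first_region   # (x0, y0, x1, y1) rectangle
    _, hy0, _, hy1 = highlight_bbox     # only the vertical extent is used
    y0, y1 = max(fy0, hy0), min(fy1, hy1)
    if y0 >= y1:
        return None                     # the band misses the detection region
    return (fx0, y0, fx1, y1)           # the second detection region
```

For example, clipping a 640x480 lane region with a highlight bounding box on the tunnel wall at rows 200-260 yields a second detection region spanning only those rows.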
The method of determining the detection region given in Fig. 2 is only illustrative; this embodiment is not limited thereto.
In this embodiment, the highlight regions of step 102 may be detected in the same way as in step 203; that is, the pixels in the second detection region whose brightness values are greater than a second threshold are taken as the pixels of the second highlight regions, thereby obtaining the second highlight regions.
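The brightness thresholding shared by step 203 and step 102 can be sketched as below. The 2D-list image format, the exclusive rectangle bounds, and the function name are illustrative assumptions:

```python
def detect_highlight_pixels(image, region, threshold):
    """Steps 203/102 sketch: collect the pixels inside `region` whose
    brightness exceeds `threshold`. `image` is a 2D list of brightness
    values indexed as image[y][x]; `region` is an (x0, y0, x1, y1)
    rectangle with exclusive upper bounds."""
    x0, y0, x1, y1 = region
    return [(x, y)
            for y in range(y0, y1)
            for x in range(x0, x1)
            if image[y][x] > threshold]
```

The returned pixel set would then be grouped into connected highlight regions; grouping is omitted here to keep the sketch short.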
In this embodiment, the first threshold and the second threshold may be the same or different.
In this embodiment, after the highlight regions (second highlight regions) in the second detection region have been obtained in step 102, the filtering of step 103 removes the highlight regions (second highlight regions) that are inconsistent with the characteristics of the target, leaving the second highlight regions that correspond to the target, from which the target is determined.
Fig. 3 is a schematic diagram of one implementation of step 103. As shown in Fig. 3, this implementation includes:
Step 301: performing first filtering on the second highlight regions according to their shapes, removing the second highlight regions whose shapes do not satisfy predetermined conditions;
Step 302: performing second filtering on the retained second highlight regions according to their center coordinates, removing, from every two second highlight regions whose horizontal-coordinate difference has an absolute value less than a third threshold, the one with the smaller vertical coordinate; and
Step 303: searching the further retained second highlight regions for matching region pairs, and obtaining the target in the second detection region according to the matching region pairs.
In step 301, the predetermined conditions include any one or any combination of the following:
the area of the second highlight region is within a first predetermined range;
the circularity of the second highlight region is within a second predetermined range;
the convexity of the second highlight region is within a third predetermined range.
That is, when the area of a second highlight region is not within the first predetermined range, and/or its circularity is not within the second predetermined range, and/or its convexity is not within the third predetermined range, that second highlight region is filtered out.
In this embodiment, the circularity is directly proportional to the area of the second highlight region and inversely proportional to the square of its perimeter; for example, it may be defined as: circularity = 4π × area / perimeter².
In this embodiment, the convexity may be defined as the ratio of the area of the second highlight region to the area of its convex hull: convexity = area(region) / area(convex hull).
The solid line in Fig. 4 shows a schematic diagram of a second highlight region, and the dotted line in Fig. 5 shows the convex hull of this region.
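Using circularity = 4π × area / perimeter² and convexity = area / convex-hull area, the first filtering of step 301 can be sketched as below. The concrete ranges are illustrative placeholders (the patent leaves the first to third predetermined ranges open), and the convex-hull area is taken as an input rather than computed:

```python
import math

def passes_shape_filter(area, perimeter, hull_area,
                        area_range=(20.0, 500.0),
                        circ_range=(0.5, 1.2),
                        conv_range=(0.8, 1.0)):
    """Step 301 sketch: keep a region only if its area, circularity, and
    convexity all fall inside their (placeholder) predetermined ranges."""
    circularity = 4.0 * math.pi * area / (perimeter ** 2)  # 1.0 for a disc
    convexity = area / hull_area                           # 1.0 if convex
    return (area_range[0] <= area <= area_range[1]
            and circ_range[0] <= circularity <= circ_range[1]
            and conv_range[0] <= convexity <= conv_range[1])
```

A compact, convex blob such as a headlamp spot passes, while a thin or ragged bright streak (low circularity or low convexity) is removed.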
In step 302, the center coordinates of each of the second highlight regions remaining after step 301 may be calculated, including the horizontal coordinate (abscissa, also called the x coordinate) and the vertical coordinate (ordinate, also called the y coordinate). The regions are then compared pairwise: for every two second highlight regions, if the absolute value of the difference of their horizontal coordinates is less than the third threshold, that is, if the centers of the two regions are roughly aligned in the vertical direction, then the second highlight region with the relatively smaller vertical coordinate (the lower second highlight region) is filtered out, and the second highlight region with the relatively larger vertical coordinate (the upper second highlight region) is retained.
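The pairwise pruning of step 302 can be sketched as follows, using the text's convention that the region with the smaller vertical coordinate is the lower one. The label-to-center mapping is an illustrative format:

```python
def second_filter(regions, third_threshold):
    """Step 302 sketch: for every pair of regions whose center x coordinates
    differ by less than the threshold (i.e. roughly vertically aligned),
    drop the one with the smaller center y coordinate (the lower region).
    `regions` maps a label to an (x, y) center."""
    dropped = set()
    items = list(regions.items())
    for i, (name_a, (xa, ya)) in enumerate(items):
        for name_b, (xb, yb) in items[i + 1:]:
            if abs(xa - xb) < third_threshold:
                dropped.add(name_a if ya < yb else name_b)
    return {name: c for name, c in regions.items() if name not in dropped}
```

With centers like those of the Fig. 10 example (1003 below 1001, 1004 below 1002), only the upper pair survives.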
In step 303, matching region pairs may be searched for among the second highlight regions retained after step 302; that is, pairs of second highlight regions satisfying the matching-pair conditions are searched for, and a matched pair of second highlight regions then corresponds to a target to be detected.
In this embodiment, the matching-pair conditions are any one or any combination of the following; that is, a matching region pair satisfies any one or any combination of the following conditions:
the absolute value of the difference of the vertical coordinates of the two second highlight regions is less than a fourth threshold;
the area ratio of the two second highlight regions is within a fourth predetermined range;
the shape similarity of the two second highlight regions is within a fifth predetermined range.
In this embodiment, if the absolute value of the difference of the vertical coordinates of two second highlight regions is less than the fourth threshold, the centers of the two second highlight regions are roughly aligned in the horizontal direction, so they may belong to the same target.
In this embodiment, if the area ratio of two second highlight regions is within the fourth predetermined range, the two second highlight regions are comparable in size, so they may also belong to the same target. In this embodiment, the area ratio refers to the larger area divided by the smaller area; for example, if the area of second highlight region A is greater than the area of second highlight region B, then the area ratio is: area(A) / area(B).
In this embodiment, if the shape similarity of two second highlight regions is within the fifth predetermined range, the shapes of the two second highlight regions are comparable, so they may also belong to the same target. In this embodiment, the shape similarity may be defined, for example, as:
similarity(A, B) = Σᵢ |hᵢᴬ − hᵢᴮ|, i = 1, ..., 7,
where A and B are the two second highlight regions, hᵢᴬ is the i-th Hu moment of second highlight region A, hᵢᴮ is the i-th Hu moment of second highlight region B, and i indexes the seven Hu moment components.
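The matching-pair test of step 303 can be sketched as follows. The region representation, the default thresholds and ranges, and the use of a simple Hu-moment distance (sum of absolute component differences, where 0 means identical shapes) are illustrative assumptions:

```python
def is_matching_pair(a, b, fourth_threshold=10.0,
                     ratio_range=(1.0, 2.0), sim_range=(0.0, 0.5)):
    """Step 303 sketch: check the three matching-pair conditions for two
    candidate headlamp regions a and b, each a dict with a center y
    coordinate "cy", an "area", and seven Hu moments "hu"."""
    # Condition 1: centers roughly aligned in the horizontal direction.
    if abs(a["cy"] - b["cy"]) >= fourth_threshold:
        return False
    # Condition 2: comparable size (larger area divided by smaller area).
    big, small = max(a["area"], b["area"]), min(a["area"], b["area"])
    if not (ratio_range[0] <= big / small <= ratio_range[1]):
        return False
    # Condition 3: comparable shape, via a Hu-moment distance.
    similarity = sum(abs(ha - hb) for ha, hb in zip(a["hu"], b["hu"]))
    return sim_range[0] <= similarity <= sim_range[1]
```

A surviving pair that passes all three checks, such as two headlamp spots at nearly the same height with similar areas and shapes, is reported as one target.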
This embodiment filters the second highlight regions detected in the second detection region twice, finds the second highlight regions corresponding to the target, and thereby determines the target, improving the accuracy of target detection under poor illumination conditions.
To make the method of this embodiment clearer and easier to understand, it is explained below with reference to an example. In this example, the detection target is a vehicle traveling in a tunnel, and there is only one such vehicle. Of course, this example is merely illustrative; the method of this embodiment is suitable for detecting any number of targets in poorly illuminated places.
Fig. 6 to Fig. 10 are schematic diagrams of the detection scenario of this example.
As shown in Fig. 6, region 601 is the aforementioned first detection region. It may be predetermined, for example the one-way lane in the tunnel, or it may be another predefined region.
As shown in Fig. 7, region 701 is the aforementioned first reference region. It may be determined according to the aforementioned first detection region 601, or it may be predetermined. The first reference region 701 is adjacent to the first detection region 601, and the headlamps of the detection target, i.e. the vehicle, can illuminate the first reference region 701.
As shown in Fig. 8, region 801 is the first highlight region detected in the first reference region 701. It may be obtained by the aforementioned step 203; that is, for each pixel located in the first reference region 701, if the brightness of the pixel is greater than the first threshold, the pixel is considered to lie in the first highlight region, and the first highlight region is thus obtained.
As shown in Fig. 9, line segment 901 is the extension line of the upper boundary of the first highlight region 801, and line segment 902 is the extension line of its lower boundary. Region 903, obtained by intersecting the first highlight region 801 and the extension lines 901 and 902 with the first detection region 601, is the second detection region.
The way of determining the second detection region 903 shown in Fig. 6 to Fig. 9 is merely illustrative. As mentioned above, the second detection region 903 may also be determined according to other strategies or methods, and the second detection region 903 contains the target to be detected.
As shown in Fig. 10, second highlight regions 1001, 1002, 1003, and 1004 are detected in the second detection region 903 in the same way as the first highlight region 801 was detected in the first reference region 701; the detection method is as described above and is not repeated here.
As shown in Fig. 10, after the second highlight regions 1001, 1002, 1003, and 1004 are detected, the two rounds of filtering described above are performed on them.
In the first filtering, the second highlight regions whose area is not within the first predetermined range, whose circularity is not within the second predetermined range, or whose convexity is not within the third predetermined range are removed. In this example, the second highlight regions 1001, 1002, 1003, and 1004 are all retained by the first filtering; that is, they all satisfy the above predetermined conditions.
In the second filtering, the center coordinates of the second highlight regions 1001, 1002, 1003, and 1004 are calculated. Since the x coordinates of the second highlight regions 1001 and 1003 are comparable, region 1003 is removed; since the x coordinates of the second highlight regions 1002 and 1004 are comparable, region 1004 is removed. The second filtering thus retains the second highlight regions 1001 and 1002.
Finally, matching region pairs satisfying the matching-pair conditions are searched for among the retained second highlight regions 1001 and 1002. Since the second highlight regions 1001 and 1002 satisfy the matching-pair conditions, the method of this embodiment detects exactly one pair of second highlight regions, 1001 and 1002, and this pair of second highlight regions corresponds to one target; in this example, it corresponds to the two headlamps of the vehicle. The method of this embodiment can thus detect the target even under poor illumination conditions, improving the accuracy of target detection.
One implementation of the method of this embodiment has been described in detail above with reference to Figs. 6-10. As mentioned above, however, some steps are optional, and some steps may be replaced by other means; the details are as described above and are not repeated here.
In this embodiment, the values of the first to fourth thresholds and of the first to fifth predetermined ranges are not limited; they may be determined from empirical values or by other means, which is not described further here.
The method of this embodiment determines the target in a detection region (referred to as the second detection region) by detecting the highlight regions (referred to as the second highlight regions) in that region, improving the accuracy of target detection under poor illumination conditions. When the method of this embodiment is applied to vehicle detection, it exploits the lighting characteristics of vehicle headlamps when traveling in poorly illuminated places such as tunnels, so that the vehicle can be effectively detected and the accuracy of target detection improved.
Embodiment 2
This embodiment provides an object detection apparatus. Since the principle by which this apparatus solves the problem is similar to the method of Embodiment 1, its specific implementation may refer to the implementation of the method of Embodiment 1, and content common to both is not repeated.
Fig. 11 is a schematic diagram of the object detection apparatus 1100 of this embodiment. As shown in Fig. 11, the object detection apparatus 1100 includes a determination unit 1101, a first detection unit 1102, and a filtering unit 1103. The determination unit 1101 determines the second detection region, the first detection unit 1102 detects the second highlight regions in the second detection region, and the filtering unit 1103 filters the second highlight regions to obtain the target in the second detection region. The specific implementation may refer to the steps in Fig. 1 and is not repeated here.
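The unit structure of Fig. 11 can be sketched as a small composition of interchangeable callables; the class name, parameter names, and callable signatures are illustrative assumptions:

```python
class ObjectDetector:
    """Sketch of the apparatus of Fig. 11: three pluggable units composed
    into a detect() pipeline."""

    def __init__(self, determine_region, detect_highlights, filter_regions):
        self.determine_region = determine_region    # determination unit 1101
        self.detect_highlights = detect_highlights  # first detection unit 1102
        self.filter_regions = filter_regions        # filtering unit 1103

    def detect(self, image):
        region = self.determine_region(image)           # step 101
        highlights = self.detect_highlights(image, region)  # step 102
        return self.filter_regions(highlights)          # step 103
```

Each unit can then be swapped independently, for example replacing the determination unit with the Fig. 12 variant without touching the other two.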
In one implementation of this embodiment, as shown in Fig. 12, the determination unit 1101 may include a first determination unit 1201, a second determination unit 1202, a second detection unit 1203, and an updating unit 1204. The first determination unit 1201 may determine a first detection region; the second determination unit 1202 may determine a first reference region according to the first detection region; the second detection unit 1203 may detect a first highlight region in the first reference region; and the updating unit 1204 may update the first detection region using the first highlight region to obtain the second detection region. The specific implementation may refer to the steps in Fig. 2 and is not repeated here.
In this embodiment, the second detection unit 1203 may take the pixels in the first reference region whose brightness values are greater than the first threshold as the pixels of the first highlight region, thereby obtaining the first highlight region.
In this embodiment, the updating unit 1204 may take, as the second detection region, the region located within the first detection region that is obtained by intersecting the upper and lower boundaries of the first highlight region and their extension lines with the boundary of the first detection region.
In this embodiment, the first detection unit 1102 may likewise take the pixels in the second detection region whose brightness values are greater than the second threshold as the pixels of the second highlight regions, thereby obtaining the second highlight regions.
In one implementation of this embodiment, as shown in Fig. 13, the filtering unit 1103 may include a first filtering unit 1301, a second filtering unit 1302, and a searching unit 1303. The first filtering unit 1301 may perform first filtering on the second highlight regions according to their shapes, removing the second highlight regions whose shapes do not satisfy the predetermined conditions; the second filtering unit 1302 may perform second filtering on the retained second highlight regions according to their center coordinates, removing, from every two second highlight regions whose horizontal-coordinate difference has an absolute value less than the third threshold, the one with the smaller vertical coordinate; and the searching unit 1303 may search the further retained second highlight regions for matching region pairs and obtain the target in the second detection region according to the matching region pairs. The specific implementation may refer to the steps in Fig. 3 and is not repeated here.
In this embodiment, the above predetermined conditions include any one or any combination of the following:
the area is within the first predetermined range;
the circularity is within the second predetermined range;
the convexity is within the third predetermined range.
In this embodiment, the circularity is directly proportional to the area of the second highlight region and inversely proportional to the square of its perimeter.
In this embodiment, the convexity refers to the ratio of the area of the second highlight region to the area of its convex hull.
In the present embodiment, the above matching region pair satisfies any one or any combination of the following conditions:
the absolute value of the difference of the vertical coordinates of the two second highlight regions is less than a fourth threshold;
the ratio of the areas of the two second highlight regions is within a fourth preset range;
the shape similarity of the two second highlight regions is within a fifth preset range.
In the present embodiment, the shape similarity may be expressed as follows:
where A and B are the two second highlight regions, the compared quantities are the Hu moments of the second highlight regions A and B, and i denotes the component of the Hu moments.
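As a hedged illustration of comparing two regions via Hu moments (the embodiment's exact similarity expression is not reproduced on this page), the sketch below computes only the first two Hu invariant components from scratch and uses a plain sum of absolute component differences as the distance; both choices are simplifying assumptions, and a smaller value means more similar shapes:

```python
import numpy as np

def hu_moments(mask):
    """First two Hu invariant moments of a binary region mask.
    (A full implementation would compute all seven components.)"""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)
    xbar, ybar = xs.mean(), ys.mean()

    def mu(p, q):   # central moment: translation-invariant
        return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()

    def eta(p, q):  # normalized central moment: also scale-invariant
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

def shape_similarity(mask_a, mask_b):
    # Illustrative Hu-moment distance: sum of absolute component
    # differences (the patent's exact expression is not reproduced here).
    return float(np.abs(hu_moments(mask_a) - hu_moments(mask_b)).sum())

mask_a = np.zeros((8, 8), dtype=bool)
mask_a[1:3, 1:4] = True          # a 2x3 highlight region
mask_b = np.zeros((8, 8), dtype=bool)
mask_b[3:5, 2:5] = True          # the same shape, translated
sim = shape_similarity(mask_a, mask_b)  # near 0: Hu moments ignore translation
```

Because Hu moments are invariant to translation, two headlamp blobs of the same shape at different image positions compare as nearly identical, which is exactly the property needed for left/right lamp matching.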
The device of the present embodiment detects the highlight regions (referred to as the second highlight regions) within the detection region (referred to as the second detection region) to determine the targets in the detection region, thereby improving the accuracy of target detection under poor illumination conditions. When the device of the present embodiment is applied to vehicle detection, it exploits the lighting characteristics of the headlamps of a vehicle travelling through a place with poor illumination conditions, such as a tunnel, so that the vehicle can be effectively detected and the accuracy of target detection is improved.
Embodiment 3
The present embodiment provides an image processing device, which includes the object detecting device as described in Embodiment 2.
Fig. 14 is a schematic diagram of the image processing device of the present embodiment. As shown in Fig. 14, the image processing device 1400 may include a central processing unit (CPU) 1401 and a memory 1402, the memory 1402 being coupled to the central processing unit 1401. The memory 1402 may store various data, and further stores a program for information processing, which is executed under the control of the central processing unit 1401.
In one implementation, the functions of the object detecting device 1100 may be integrated into the central processing unit 1401, and the central processing unit 1401 may be configured to carry out the object detection method as described in Embodiment 1.
In another implementation, the object detecting device 1100 may be configured separately from the central processing unit 1401; for example, the object detecting device may be configured as a chip connected to the central processing unit 1401, the functions of the object detecting device being realized under the control of the central processing unit 1401.
In the present embodiment, the central processing unit 1401 may be configured to perform the following control: determining a second detection region; detecting second highlight regions in the second detection region; and filtering the second highlight regions to obtain the targets in the second detection region.
In addition, as shown in Fig. 14, the image processing device 1400 may further include an input/output (I/O) device 1403 and a display 1404, etc.; the functions of these components are similar to those in the prior art and shall not be repeated here. It is worth noting that the image processing device 1400 does not necessarily include all the components shown in Fig. 14; furthermore, the image processing device 1400 may also include components not shown in Fig. 14, for which reference may be made to the prior art.
An embodiment of the present invention provides a computer-readable program, wherein when the program is executed in an object detecting device or an image processing device, the program causes the object detecting device or the image processing device to carry out the object detection method as described in Embodiment 1.
An embodiment of the present invention provides a storage medium storing a computer-readable program, wherein the computer-readable program causes an object detecting device or an image processing device to carry out the object detection method as described in Embodiment 1.
The above apparatuses and methods of the present invention may be implemented by hardware, or by hardware in combination with software. The present invention relates to a computer-readable program which, when executed by a logic component, enables the logic component to implement the apparatuses or constituent parts described above, or to carry out the methods or steps described above. The present invention also relates to a storage medium for storing the above program, such as a hard disk, a magnetic disk, an optical disk, a DVD or a flash memory.
The methods and apparatuses described in combination with the embodiments of the present invention may be embodied directly as hardware, as software modules executed by a processor, or as a combination of the two. For example, one or more of the functional blocks shown in Fig. 11 and/or one or more combinations of the functional blocks (e.g., the determining unit, the first detecting unit and the filtering unit) may correspond either to software modules of a computer program flow or to hardware modules. These software modules may correspond respectively to the steps shown in Fig. 1. These hardware modules may be realized, for example, by solidifying the software modules using a field-programmable gate array (FPGA).
A software module may reside in a RAM memory, a flash memory, a ROM memory, an EPROM memory, an EEPROM memory, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium; or the storage medium may be an integral part of the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in a memory of a mobile terminal, or in a memory card insertable into a mobile terminal. For example, if a device (such as a mobile terminal) employs a large-capacity MEGA-SIM card or a large-capacity flash memory device, the software module may be stored in the MEGA-SIM card or the large-capacity flash memory device.
One or more of the functional blocks described with reference to the drawings and/or one or more combinations of the functional blocks may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof for carrying out the functions described in the present invention. One or more of the functional blocks described with reference to the drawings and/or one or more combinations of the functional blocks may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.
The present invention has been described above with reference to particular embodiments. However, it should be understood by those skilled in the art that these descriptions are all exemplary and do not limit the scope of the present invention. Those skilled in the art may make various variations and modifications to the present invention in accordance with the spirit and principle of the present invention, and such variations and modifications also fall within the scope of the present invention.
As to the embodiments including the above embodiments, the following notes are further disclosed:
Note 1. An object detection method, wherein the method includes:
determining a second detection region;
detecting second highlight regions in the second detection region; and
filtering the second highlight regions to obtain targets in the second detection region.
Note 2. The object detection method according to Note 1, wherein determining the second detection region includes:
determining a first detection region;
determining a first reference region according to the first detection region;
detecting a first highlight region in the first reference region; and
updating the first detection region using the first highlight region to obtain the second detection region.
Note 3. The object detection method according to Note 2, wherein detecting the first highlight region in the first reference region includes:
taking pixels in the first reference region whose brightness values are greater than a first threshold as the pixels of the first highlight region, so as to obtain the first highlight region.
Note 4. The object detection method according to Note 2, wherein updating the first detection region using the first highlight region to obtain the second detection region includes:
intersecting the upper and lower boundaries of the first highlight region and their extension lines with the boundary of the first detection region, to obtain the second detection region located within the first detection region.
Note 5. The object detection method according to Note 1, wherein detecting the second highlight regions in the second detection region includes:
taking pixels in the second detection region whose brightness values are greater than a second threshold as the pixels of the second highlight regions, so as to obtain the second highlight regions.
Note 6. The object detection method according to Note 1, wherein filtering the second highlight regions to obtain the targets in the second detection region includes:
performing a first filtering on the second highlight regions according to the shapes of the second highlight regions, removing second highlight regions whose shapes do not satisfy a predetermined condition;
performing a second filtering on the retained second highlight regions according to the center coordinates of the retained second highlight regions, removing, of two second highlight regions whose horizontal-coordinate difference has an absolute value less than a third threshold, the second highlight region with the smaller vertical coordinate; and
searching the further retained second highlight regions for matching region pairs, and obtaining the targets in the second detection region according to the matching region pairs.
Note 7. The object detection method according to Note 6, wherein the predetermined condition includes any one or any combination of the following:
the area is within a first preset range;
the circularity is within a second preset range;
the convexity is within a third preset range.
Note 8. The object detection method according to Note 7, wherein the circularity is proportional to the area of the second highlight region and inversely proportional to the square of the perimeter of the second highlight region.
Note 9. The object detection method according to Note 7, wherein the convexity refers to the ratio of the area of the second highlight region to the area of the convex hull of the second highlight region.
Note 10. The object detection method according to Note 6, wherein the matching region pair satisfies any one or any combination of the following conditions:
the difference of the center coordinates of the two second highlight regions is less than a fourth threshold;
the ratio of the areas of the two second highlight regions is within a fourth preset range;
the shape similarity of the two second highlight regions is within a fifth preset range.
Note 11. The object detection method according to Note 10, wherein the shape similarity is expressed as follows:
where A and B are the two second highlight regions, the compared quantities are the Hu moments of the second highlight regions A and B, and i denotes the component of the Hu moments.

Claims (10)

1. An object detecting device, wherein the device includes:
a determining unit configured to determine a second detection region;
a first detecting unit configured to detect second highlight regions in the second detection region; and
a filtering unit configured to filter the second highlight regions to obtain targets in the second detection region.
2. The object detecting device according to claim 1, wherein the determining unit includes:
a first determining unit configured to determine a first detection region;
a second determining unit configured to determine a first reference region according to the first detection region;
a second detecting unit configured to detect a first highlight region in the first reference region; and
an updating unit configured to update the first detection region using the first highlight region to obtain the second detection region.
3. The object detecting device according to claim 2, wherein the second detecting unit takes pixels in the first reference region whose brightness values are greater than a first threshold as the pixels of the first highlight region, so as to obtain the first highlight region.
4. The object detecting device according to claim 2, wherein the updating unit takes, as the second detection region, the region located within the first detection region that is obtained by intersecting the upper and lower boundaries of the first highlight region and their extension lines with the boundary of the first detection region.
5. The object detecting device according to claim 1, wherein the first detecting unit takes pixels in the second detection region whose brightness values are greater than a second threshold as the pixels of the second highlight regions, so as to obtain the second highlight regions.
6. The object detecting device according to claim 1, wherein the filtering unit includes:
a first filtering unit configured to perform a first filtering on the second highlight regions according to the shapes of the second highlight regions, removing second highlight regions whose shapes do not satisfy a predetermined condition;
a second filtering unit configured to perform a second filtering on the retained second highlight regions according to the center coordinates of the retained second highlight regions, removing, of two second highlight regions whose horizontal-coordinate difference has an absolute value less than a third threshold, the second highlight region with the smaller vertical coordinate; and
a searching unit configured to search the further retained second highlight regions for matching region pairs, and obtain the targets in the second detection region according to the matching region pairs.
7. The object detecting device according to claim 6, wherein the predetermined condition includes any one or any combination of the following:
the area is within a first preset range;
the circularity is within a second preset range;
the convexity is within a third preset range.
8. The object detecting device according to claim 6, wherein the matching region pair satisfies any one or any combination of the following conditions:
the absolute value of the difference of the vertical coordinates of the two second highlight regions is less than a fourth threshold;
the ratio of the areas of the two second highlight regions is within a fourth preset range;
the shape similarity of the two second highlight regions is within a fifth preset range.
9. The object detecting device according to claim 8, wherein the shape similarity is expressed as follows:
where A and B are the two second highlight regions, the compared quantities are the Hu moments of the second highlight regions A and B, and i denotes the component of the Hu moments.
10. An image processing device, wherein the image processing device includes the object detecting device according to any one of claims 1-9.
CN201810554618.0A 2017-07-14 2018-06-01 Target detection method and device and image processing equipment Active CN109255349B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017105749890 2017-07-14
CN201710574989 2017-07-14

Publications (2)

Publication Number Publication Date
CN109255349A true CN109255349A (en) 2019-01-22
CN109255349B CN109255349B (en) 2021-11-23

Family

ID=65051963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810554618.0A Active CN109255349B (en) 2017-07-14 2018-06-01 Target detection method and device and image processing equipment

Country Status (2)

Country Link
JP (1) JP7114965B2 (en)
CN (1) CN109255349B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648871A (en) * 2020-12-18 2022-06-21 富士通株式会社 Speed fusion method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002083301A (en) * 2000-09-06 2002-03-22 Mitsubishi Electric Corp Traffic monitoring device
US20060227133A1 (en) * 2000-03-28 2006-10-12 Michael Petrov System and method of three-dimensional image capture and modeling
CN102122344A (en) * 2011-01-07 2011-07-13 南京理工大学 Road border detection method based on infrared image
CN102567705A (en) * 2010-12-23 2012-07-11 北京邮电大学 Method for detecting and tracking night running vehicle
CN103226820A (en) * 2013-04-17 2013-07-31 南京理工大学 Improved two-dimensional maximum entropy division night vision image fusion target detection algorithm
CN104732235A (en) * 2015-03-19 2015-06-24 杭州电子科技大学 Vehicle detection method for eliminating night road reflective interference
CN105260701A (en) * 2015-09-14 2016-01-20 中电海康集团有限公司 Front vehicle detection method applied to complex scene
CN105320938A (en) * 2015-09-25 2016-02-10 安徽师范大学 Rear vehicle detection method in nighttime environment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06276524A (en) * 1993-03-19 1994-09-30 Toyota Motor Corp Device for recognizing vehicle running in opposite direction
JP4935586B2 (en) 2007-09-05 2012-05-23 株式会社デンソー Image processing apparatus, in-vehicle image processing apparatus, in-vehicle image display apparatus, and vehicle control apparatus
JP2011103070A (en) 2009-11-11 2011-05-26 Toyota Motor Corp Nighttime vehicle detector
JP2016142647A (en) 2015-02-03 2016-08-08 クラリオン株式会社 Image processing device and vehicle system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060227133A1 (en) * 2000-03-28 2006-10-12 Michael Petrov System and method of three-dimensional image capture and modeling
JP2002083301A (en) * 2000-09-06 2002-03-22 Mitsubishi Electric Corp Traffic monitoring device
CN102567705A (en) * 2010-12-23 2012-07-11 北京邮电大学 Method for detecting and tracking night running vehicle
CN102122344A (en) * 2011-01-07 2011-07-13 南京理工大学 Road border detection method based on infrared image
CN103226820A (en) * 2013-04-17 2013-07-31 南京理工大学 Improved two-dimensional maximum entropy division night vision image fusion target detection algorithm
CN104732235A (en) * 2015-03-19 2015-06-24 杭州电子科技大学 Vehicle detection method for eliminating night road reflective interference
CN105260701A (en) * 2015-09-14 2016-01-20 中电海康集团有限公司 Front vehicle detection method applied to complex scene
CN105320938A (en) * 2015-09-25 2016-02-10 安徽师范大学 Rear vehicle detection method in nighttime environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SAYANAN SIVARAMAN et al.: "Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis", IEEE Transactions on Intelligent Transportation Systems *
吴海涛 et al.: "Nighttime Video Vehicle Detection in Complex Environments", Application Research of Computers *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648871A (en) * 2020-12-18 2022-06-21 富士通株式会社 Speed fusion method and device
CN114648871B (en) * 2020-12-18 2024-01-02 富士通株式会社 Speed fusion method and device

Also Published As

Publication number Publication date
JP2019021295A (en) 2019-02-07
JP7114965B2 (en) 2022-08-09
CN109255349B (en) 2021-11-23

Similar Documents

Publication Publication Date Title
US20210264176A1 (en) Hazard detection from a camera in a scene with moving shadows
Son et al. Real-time illumination invariant lane detection for lane departure warning system
Kosaka et al. Vision-based nighttime vehicle detection using CenSurE and SVM
CN108629292B (en) Curved lane line detection method and device and terminal
Fan et al. Real-time stereo vision-based lane detection system
Broggi et al. Vehicle detection for autonomous parking using a Soft-Cascade AdaBoost classifier
CN110929655B (en) Lane line identification method in driving process, terminal device and storage medium
Zakaria et al. Lane detection in autonomous vehicles: A systematic review
CN106971185A (en) A kind of license plate locating method and device based on full convolutional network
Hmida et al. Hardware implementation and validation of a traffic road sign detection and identification system
CN110991489A (en) Driving data labeling method, device and system
Salarian et al. A vision based system for traffic lights recognition
CN108052921B (en) Lane line detection method, device and terminal
CN108765456B (en) Target tracking method and system based on linear edge characteristics
Chen et al. Real-time vision-based multiple vehicle detection and tracking for nighttime traffic surveillance
Boumediene et al. Triangular traffic signs detection based on RSLD algorithm
CN110796230A (en) Method, equipment and storage medium for training and using convolutional neural network
Chen et al. Embedded vision-based nighttime driver assistance system
CN105023002A (en) Vehicle logo positioning method based on active vision
CN114202936B (en) Traffic guidance robot and control method thereof
Sultana et al. Vision-based robust lane detection and tracking in challenging conditions
CN101369312A (en) Method and equipment for detecting intersection in image
CN109255349A (en) Object detection method, device and image processing equipment
JP2010225125A (en) Passage detecting program, and passage detection device and method
CN112308801A (en) Road traffic tracking management method and system based on big data image acquisition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant