CN109255349B - Target detection method and device and image processing equipment - Google Patents

Target detection method and device and image processing equipment

Info

Publication number
CN109255349B
CN109255349B
Authority
CN
China
Prior art keywords
area
highlight
detection
region
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810554618.0A
Other languages
Chinese (zh)
Other versions
CN109255349A (en)
Inventor
白向晖
杨雅文
谭志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Publication of CN109255349A publication Critical patent/CN109255349A/en
Application granted granted Critical
Publication of CN109255349B publication Critical patent/CN109255349B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A target detection method, a target detection apparatus, and an image processing device are provided. The target detection method comprises the following steps: determining a second detection area; detecting a second highlight area in the second detection area; and filtering the second highlight area to obtain the target in the second detection area. Embodiments of the invention determine the target in the detection area (referred to as the second detection area) by detecting highlight areas (referred to as second highlight areas) within it, thereby improving the accuracy of target detection under poor illumination conditions.

Description

Target detection method and device and image processing equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a target detection method and apparatus, and an image processing device.
Background
With the development of information technology, image-based object detection technology is increasingly widely used. For example, in the field of traffic monitoring, target detection may be performed on a video monitoring image, so as to identify a target such as a specific vehicle, and further implement functions such as identification, tracking, and control of the target.
It should be noted that the above background is provided only to clarify and completely describe the technical solutions of the present invention and to aid understanding by those skilled in the art. These solutions are not to be considered known to persons skilled in the art merely because they are set forth in the background section of the invention.
Disclosure of Invention
The inventors found that when a target such as a vehicle enters a tunnel or travels at night, the accuracy of target detection drops sharply due to poor illumination and poor visibility.
In order to solve the above problem, embodiments of the present invention provide a target detection method and apparatus, and an image processing device.
According to a first aspect of the embodiments of the present invention, there is provided a target detection method, wherein the method includes:
determining a second detection area;
detecting a second highlight region in the second detection region; and
filtering the second highlight area to obtain a target in the second detection area.
According to a second aspect of embodiments of the present invention, there is provided an object detection apparatus, wherein the apparatus comprises:
a determination unit that determines a second detection area;
a first detection unit that detects a second highlight region in the second detection region; and
a filtering unit that filters the second highlight area to obtain a target in the second detection area.
According to a third aspect of embodiments of the present invention, there is provided an image processing apparatus comprising the object detection device of the second aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable program, wherein when the program is executed in an object detection apparatus or an image processing device, the program causes the object detection apparatus or the image processing device to execute the object detection method according to the first aspect of embodiments of the present invention.
According to a fifth aspect of embodiments of the present invention, there is provided a storage medium storing a computer-readable program, wherein the computer-readable program causes an object detection apparatus or an image processing device to execute the object detection method according to the first aspect of embodiments of the present invention.
The beneficial effects of the embodiments of the invention are as follows: by detecting the highlight area (referred to as the second highlight area) within the detection area (referred to as the second detection area), the target in the detection area is determined, which improves the accuracy of target detection under poor illumination conditions. When applied to vehicle detection, the embodiments can effectively detect a vehicle traveling in a poorly lit place such as a tunnel by exploiting the illumination characteristics of its headlamps, improving the accuracy of target detection.
Specific embodiments of the present invention are disclosed in detail with reference to the following description and drawings, indicating the manner in which the principles of the invention may be employed. It should be understood that the embodiments of the invention are not so limited in scope. The embodiments of the invention include many variations, modifications and equivalents within the scope of the terms of the appended claims.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments, in combination with or instead of the features of the other embodiments.
It should be emphasized that the term "comprises/comprising" when used herein, is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps or components.
Drawings
Elements and features described in one drawing or one implementation of an embodiment of the invention may be combined with elements and features shown in one or more other drawings or implementations. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views, and may be used to designate corresponding parts for use in more than one embodiment.
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic diagram of a target detection method of example 1;
fig. 2 is a schematic view of determining a second detection region in the object detection method of embodiment 1;
fig. 3 is a schematic diagram of filtering the second highlight region in the object detection method of embodiment 1;
FIG. 4 is a schematic view of a second highlight region;
FIG. 5 is a schematic illustration of a convex hull of the second highlight region of FIG. 4;
6-10 are schematic diagrams of an implementation scenario of the object detection method of embodiment 1;
FIG. 11 is a schematic view of an object detecting apparatus of embodiment 2;
FIG. 12 is a schematic view of a determination unit of the object detection device of embodiment 2;
FIG. 13 is a schematic view of a filter unit of the object detecting device of embodiment 2;
fig. 14 is a schematic diagram of an image processing apparatus of embodiment 3.
Detailed Description
The foregoing and other features of the invention will become apparent from the following description taken in conjunction with the accompanying drawings. In the description and drawings, particular embodiments of the invention have been disclosed in detail as being indicative of some of the embodiments in which the principles of the invention may be employed, it being understood that the invention is not limited to the embodiments described, but, on the contrary, is intended to cover all modifications, variations, and equivalents falling within the scope of the appended claims.
In the embodiments of the present invention, the terms "first", "second", and the like are used for distinguishing different elements by name, but do not denote a spatial arrangement, a temporal order, or the like of the elements, and the elements should not be limited by the terms. The term "and/or" includes any and all combinations of one or more of the associated listed terms. The terms "comprising," "including," "having," and the like, refer to the presence of stated features, elements, components, and do not preclude the presence or addition of one or more other features, elements, components, and elements.
In the embodiments of the invention, the singular forms "a", "an", and the like include the plural forms and should be construed broadly as "a kind of" or "a type of" rather than as limited to the meaning of "one"; furthermore, the term "comprising" should be understood to cover both the singular and the plural, unless the context clearly indicates otherwise. Further, the term "according to" should be understood as "at least partially according to", and the term "based on" should be understood as "based at least partially on", unless the context clearly indicates otherwise.
Various embodiments of the present invention will be described below with reference to the drawings. These embodiments are merely exemplary and are not intended to limit the present invention.
Example 1
The present embodiment provides a target detection method, fig. 1 is a schematic diagram of the method, please refer to fig. 1, and the method includes:
step 101: determining a second detection area;
step 102: detecting a second highlight region within the second detection region;
step 103: filtering the second highlight area to obtain a target in the second detection area.
In this embodiment, the detection area contains the target to be detected. By detecting highlight areas within the detection area and removing those inconsistent with the features of the target, the remaining highlight areas correspond to the target to be detected, so the target can be obtained. Moreover, because the target is determined by detecting highlight areas, the accuracy of target detection under poor illumination conditions is improved.
In the present embodiment, for convenience of description, the detection region in steps 101 to 103 is referred to as the "second detection region", and the highlight region in steps 102 and 103 is referred to as the "second highlight region".
In this embodiment, the method of determining the detection region (second detection region) in step 101 is not limited, and in one embodiment, a region including the detection target may be set as the detection region (second detection region), and in another embodiment, the detection region (second detection region) may be determined based on the reference region.
Fig. 2 is a schematic diagram of an embodiment of step 101, and referring to fig. 2, the method includes:
step 201: determining a first detection area;
step 202: determining a first reference area according to the first detection area;
step 203: detecting a first highlight region in the first reference region; and
step 204: and updating the first detection area by using the first highlight area to obtain a second detection area.
In step 201, the first detection region may be set arbitrarily; for example, a lane region in which vehicles travel may be set as the first detection region. The method of this embodiment then determines, from the first detection region, the second detection region that serves as the detection region in step 101.
In step 202, a first reference area is determined according to the first detection area. The first reference area may be an area adjacent to the first detection area that a luminous body of a target located within the first detection area can illuminate. For example, when the first detection region is a lane region in which vehicles travel, the first reference region may be a region adjacent to the lane that is illuminated by the vehicles' headlamps, such as the wall of a tunnel. This is merely an example: the first reference area need not be adjacent to the first detection area, as long as the illuminant of a target in the first detection area can illuminate it.
In the present embodiment, the second detection region may be found by detecting the first highlight region within the first reference region.
In step 203, the pixels in the first reference region having luminance values greater than the first threshold may be used as the pixels in the first highlight region, so as to obtain the first highlight region.
In step 204, the region within the first detection region that is bounded by the upper and lower boundaries of the first highlight region (and their extensions) where they intersect the boundary of the first detection region may be used as the second detection region.
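If the first detection region and the first highlight region are both approximated as axis-aligned rectangles, step 204 amounts to intersecting the horizontal band spanned by the highlight region's upper and lower boundaries (extended across the image) with the first detection region. The following is a minimal sketch under that assumption; the function and parameter names are illustrative, not from the patent:

```python
def update_detection_region(first_detection, first_highlight):
    """Intersect the horizontal band of the first highlight region with the
    first detection region to obtain the second detection region.

    Both arguments are axis-aligned rectangles (x0, y0, x1, y1); returns the
    second detection region, or None if the band misses the detection region.
    """
    x0, y0, x1, y1 = first_detection
    _, hy0, _, hy1 = first_highlight
    # Extend the upper/lower boundaries of the highlight region horizontally
    # and clip them to the vertical extent of the first detection region.
    new_y0 = max(y0, hy0)
    new_y1 = min(y1, hy1)
    if new_y0 >= new_y1:
        return None
    return (x0, new_y0, x1, new_y1)
```

For example, a lane region spanning y = 0..100 combined with a highlight band at y = 20..60 on the tunnel wall yields a second detection region restricted to y = 20..60, narrowing the search range as described above.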
Through this processing, the range of the detection region is narrowed, that is, the detection region is updated from the first detection region to the second detection region, which improves the accuracy of target detection and reduces its computational cost.
The determination method of the detection area shown in fig. 2 is only an example, but the embodiment is not limited thereto.
In this embodiment, the detection method for the highlight region in step 102 may be the same as the detection method in step 203, that is, the pixels in the second detection region with the luminance value greater than the second threshold are taken as the pixels in the second highlight region, so as to obtain the second highlight region.
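The thresholding used in steps 203 and 102 can be sketched as a per-pixel comparison. The sketch below uses NumPy; the names are illustrative, and the grouping of marked pixels into connected highlight regions (which the patent does not detail) would follow as a separate labeling step:

```python
import numpy as np

def detect_highlight_mask(gray, region_mask, threshold):
    """Mark pixels inside the region whose luminance exceeds the threshold.

    gray: 2-D array of luminance values; region_mask: boolean array of the
    same shape selecting the detection or reference region of interest.
    """
    return (gray > threshold) & region_mask
```

Connected groups of marked pixels would then form the first or second highlight regions, for instance via a connected-component labeling pass.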
In this embodiment, the first threshold and the second threshold may be the same or different.
In this embodiment, after the highlight areas (second highlight areas) in the second detection area are obtained in step 102, the second highlight areas inconsistent with the features of the target are removed by the filtering in step 103, leaving the second highlight areas corresponding to the target, so that the target can be determined.
Fig. 3 shows a schematic diagram of an embodiment of step 103, and as shown in fig. 3, the method includes:
step 301: performing first filtering on the second highlight areas according to their shapes, removing those whose shapes do not meet a predetermined condition;
step 302: performing second filtering on the retained second highlight areas according to their center coordinates, removing, of any two second highlight areas whose horizontal coordinates differ by less than a third threshold in absolute value, the one with the smaller vertical coordinate; and
step 303: searching for matching area pairs among the remaining second highlight areas, and obtaining the target in the second detection area from the matching area pairs.
In step 301, the predetermined condition includes any one or any combination of the following:
the area of the second highlight region is within a first predetermined range;
the roundness of the second highlight region is within a second predetermined range;
the convexity of the second highlight region is within a third predetermined range.
That is, when the area of a certain second highlight region is not within the first predetermined range, and/or the roundness is not within the second predetermined range, and/or the convexity is not within the third predetermined range, the second highlight region is filtered out.
In the present embodiment, the roundness is proportional to the area of the second highlight region and inversely proportional to the square of its perimeter, and may be defined as:

    Roundness = 4π × S / C²

where S is the area of the second highlight region and C is its perimeter; the constant 4π normalizes the roundness of a perfect circle to 1.
In this embodiment, the convexity may be defined as the ratio of the area of the second highlight region to the area of its convex hull:

    Convexity = S_region / S_hull
the solid line in fig. 4 shows a schematic diagram of a certain second highlight region, and the dotted line in fig. 5 shows the convex hull of this second highlight region.
In step 302, the center coordinates of each second highlight region retained in step 301 may be calculated, including the horizontal coordinate (abscissa, also referred to as the x-axis) and the vertical coordinate (ordinate, also referred to as the y-axis). The regions are then compared pairwise: for any two second highlight regions, if the absolute value of the difference between their horizontal coordinates is smaller than a third threshold, that is, their center coordinates are approximately aligned in the vertical direction, the region with the smaller vertical coordinate (the lower one) is filtered out and the region with the larger vertical coordinate (the upper one) is retained.
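The pairwise filtering of step 302 can be sketched over the region centers as follows. Note that the patent treats the larger vertical coordinate as the upper region, i.e., a y-up convention; in standard image coordinates (y growing downward) the comparison would flip. Names are illustrative:

```python
def second_filter(centers, third_threshold):
    """Keep, of any two centers whose horizontal coordinates differ by less
    than the threshold in absolute value, only the one with the larger
    vertical coordinate (the upper one, under a y-up convention).

    centers: list of (x, y) center coordinates of the retained regions.
    """
    removed = set()
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            (xi, yi), (xj, yj) = centers[i], centers[j]
            if abs(xi - xj) < third_threshold:
                # Drop the lower of the two vertically aligned regions.
                removed.add(i if yi < yj else j)
    return [c for k, c in enumerate(centers) if k not in removed]
```

For instance, of two vertically aligned headlight reflections, only the upper one survives, matching the example with regions 1001-1004 later in this embodiment.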
In step 303, matching area pairs may be searched for among the second highlight regions retained in step 302, that is, pairs of second highlight regions satisfying the matching conditions are sought; the matched second highlight regions correspond to the target to be detected.
In the present embodiment, the matching region pair condition is any one or any combination of the following conditions, that is, the matching region pair satisfies any one or any combination of the following conditions:
the absolute value of the difference between the vertical coordinates of the two second highlight areas is smaller than a fourth threshold;
the area ratio of the two second highlight areas is within a fourth predetermined range;
the shape similarity of the two second highlight regions is within a fifth predetermined range.
In the present embodiment, the absolute value of the difference between the vertical coordinates of the two second highlight regions is smaller than the fourth threshold, that is, the center coordinates of the two second highlight regions are substantially identical in the horizontal direction, and may belong to the same target.
In the present embodiment, the area ratio of the two second highlight regions being within the fourth predetermined range means that the two regions are of comparable size and may belong to the same target. The area ratio refers to the larger area divided by the smaller area; for example, if the area of second highlight region A is larger than that of second highlight region B:

    AreaRatio = S_A / S_B

where S_A and S_B are the areas of regions A and B.
in this embodiment, the similarity of the shapes of the two second highlight regions is within a fifth predetermined range, that is, the shapes of the two second highlight regions are equivalent and may belong to the same target. In the present embodiment, the shape similarity may be defined as:
    Similarity(A, B) = Σ_{i=1..7} | h_i^A − h_i^B |

where A and B are the two second highlight regions, h_i^A and h_i^B are the i-th components of the Hu moments of A and B respectively, and i runs over the seven components of the Hu moments.
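Taken together, the three matching conditions can be sketched as a predicate over two candidate regions. The shape-similarity term below uses the sum of absolute differences over the seven Hu moment components, which is one plausible reading of the definition above; all names, thresholds, and the dict layout are illustrative, not from the patent:

```python
def shape_similarity(hu_a, hu_b):
    """Sum of absolute differences over the seven Hu moment components
    (one plausible reading of the patent's definition)."""
    assert len(hu_a) == len(hu_b) == 7
    return sum(abs(a - b) for a, b in zip(hu_a, hu_b))

def is_matching_pair(reg_a, reg_b, *, fourth_threshold, area_ratio_range, sim_range):
    """Check the three matching-pair conditions for two candidate regions.

    Each region is a dict with 'center' = (x, y), 'area', and 'hu' (the
    seven Hu moment components).
    """
    # Condition 1: vertical coordinates approximately equal.
    if abs(reg_a['center'][1] - reg_b['center'][1]) >= fourth_threshold:
        return False
    # Condition 2: area ratio (larger area / smaller area) within range.
    big = max(reg_a['area'], reg_b['area'])
    small = min(reg_a['area'], reg_b['area'])
    if not (area_ratio_range[0] <= big / small <= area_ratio_range[1]):
        return False
    # Condition 3: Hu-moment shape similarity within range.
    sim = shape_similarity(reg_a['hu'], reg_b['hu'])
    return sim_range[0] <= sim <= sim_range[1]
```

A surviving pair, such as two headlamp regions at nearly the same height with similar areas and shapes, is reported as one detected target.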
In this embodiment, the second highlight areas detected in the second detection area are filtered twice to find those corresponding to the target, thereby determining the target and improving the accuracy of target detection under poor illumination conditions.
In order to make the method of the present embodiment more clearly understood, the method of the present embodiment is described below with reference to an example. In this example, the detection target is a vehicle traveling in a tunnel, and the vehicle is only one. Of course, this example is merely illustrative, and the method of the present embodiment is applicable to any number of target detections in poorly illuminated locations.
Fig. 6-10 are schematic diagrams of the detection scenario of the present example.
As shown in fig. 6, the area 601 is the aforementioned first detection area, which may be predefined, for example, a unilateral lane in the tunnel, or may be a predefined other area.
As shown in fig. 7, the region 701 is the first reference region, which may be determined according to the first detection region 601 or may be predefined, and the first reference region 701 is adjacent to the first detection region 601, and a detection target, that is, a headlight of a vehicle may be irradiated to the first reference region 701.
As shown in fig. 8, the region 801 is a first highlight region detected from the first reference region 701, and can be obtained through the foregoing step 203, that is, for a pixel located in the first reference region 701, if the brightness of the pixel is greater than a first threshold, the pixel is considered to be located in the first highlight region, and thus the first highlight region is obtained.
As shown in fig. 9, a line segment 901 is an extension of an upper boundary of the first highlight region 801, a line segment 902 is an extension of a lower boundary of the first highlight region 801, and a region 903 where the first highlight region 801, the extensions 901, 902 and the first detection region 601 intersect is a second detection region.
The determination manner of the second detection region 903 shown in fig. 6 to 9 is only an example, and as described above, the second detection region 903 may also be determined according to other strategies or methods, and the second detection region 903 contains the object to be detected.
As shown in fig. 10, the second highlight areas 1001, 1002, 1003, and 1004 are detected in the second detection area 903 in the same manner as the first highlight area 801 is detected in the first reference area 701, and the detection manner is as described above and will not be described again here.
As shown in fig. 10, after the second highlight areas 1001, 1002, 1003, and 1004 are detected, the second highlight areas 1001, 1002, 1003, and 1004 are subjected to the above-described two-time filtering.
In the first filtering, any second highlight region whose area is not within the first predetermined range, whose roundness is not within the second predetermined range, or whose convexity is not within the third predetermined range is removed. In this example, all of the second highlight regions 1001, 1002, 1003, and 1004 satisfy the predetermined conditions and are retained.
In the second filtering, the center coordinates of the second highlight areas 1001, 1002, 1003, and 1004 are calculated. Because the x-coordinates of 1001 and 1003 are approximately equal and 1003 is the lower region, 1003 is removed; likewise, because the x-coordinates of 1002 and 1004 are approximately equal and 1004 is the lower region, 1004 is removed. Thus, after the second filtering, the second highlight regions 1001 and 1002 remain.
Finally, matching area pairs satisfying the matching conditions are searched for among the retained second highlight areas 1001 and 1002. Since 1001 and 1002 satisfy the matching conditions, the method of this example detects exactly one pair, and that pair corresponds to one target: in this example, the two headlamps of the vehicle. The method can therefore detect a target under poor illumination conditions, improving the accuracy of target detection.
One embodiment of the method of this embodiment is described in detail above with reference to fig. 6 to 10, but as described above, some steps are optional, and some steps may be replaced by other means, which is specifically described above and will not be described herein again.
In this embodiment, values of the first to fourth thresholds and values of the first to fifth predetermined ranges are not limited, and may be determined according to empirical values or may be determined by other means, which is not described herein again.
The method of the embodiment determines the target in the detection area (referred to as the second detection area) by detecting the highlight area (referred to as the second highlight area) in the detection area (referred to as the second detection area), so that the accuracy of target detection in the case of poor lighting conditions is improved. When the method of the embodiment is applied to vehicle detection, the vehicle can be effectively detected by utilizing the lighting characteristics of the headlamp when the vehicle runs in places with poor lighting conditions such as a tunnel, and the like, so that the accuracy of target detection is improved.
Example 2
The present embodiment provides an object detection device, and since the principle of solving the problem of the device is similar to the method of embodiment 1, the specific implementation thereof can refer to the implementation of the method of embodiment 1, and the description of the same contents will not be repeated.
Fig. 11 is a schematic diagram of the object detection apparatus 1100 of the present embodiment, and as shown in fig. 11, the object detection apparatus 1100 includes: a determination unit 1101, a first detection unit 1102, and a filtering unit 1103. The determining unit 1101 is configured to determine a second detection area, the first detecting unit 1102 is configured to detect a second highlight area in the second detection area, and the filtering unit 1103 is configured to filter the second highlight area to obtain an object in the second detection area. The specific implementation can refer to each step in fig. 1, and details are not described here.
In one embodiment of this embodiment, as shown in fig. 12, the determining unit 1101 may include: a first determination unit 1201, a second determination unit 1202, a second detection unit 1203, and an update unit 1204. The first determination unit 1201 may determine a first detection area; the second determination unit 1202 may determine a first reference area from the first detection area; the second detecting unit 1203 may detect a first highlight region in the first reference region; the updating unit 1204 may update the first detection area with the first highlight area to obtain a second detection area. The specific implementation thereof can refer to the steps in fig. 2, and is not described herein again.
In this embodiment, second detecting section 1203 may obtain the first highlight region by using, as pixels in the first highlight region, pixels in the first reference region whose luminance value is greater than a first threshold value.
In this embodiment, updating section 1204 may set, as the second detection region, a region located within the first detection region, where the upper and lower boundaries of the first highlight region and the extension thereof intersect with the boundary of the first detection region.
In this embodiment, the first detection unit 1102 may also use a pixel in the second detection area whose luminance value is greater than the second threshold as a pixel in the second highlight area, so as to obtain the second highlight area.
In one implementation of this embodiment, as shown in fig. 13, the filtering unit 1103 may include: a first filtering unit 1301, a second filtering unit 1302, and a searching unit 1303. The first filtering unit 1301 may perform first filtering on the second highlight regions according to their shapes, removing those whose shapes do not meet a predetermined condition; the second filtering unit 1302 may perform second filtering on the retained second highlight regions according to their center coordinates, removing, of any two second highlight regions whose horizontal coordinates differ by less than the third threshold in absolute value, the one with the smaller vertical coordinate; the searching unit 1303 may search for matching area pairs among the remaining second highlight regions and obtain the target in the second detection area from the matching area pairs. The specific implementation may refer to the steps in fig. 3 and is not repeated here.
In this embodiment, the predetermined condition includes any one or any combination of the following:
the area is within a first predetermined range;
the roundness is within a second predetermined range;
the convexity is within a third predetermined range.
In the present embodiment, the roundness is proportional to the area of the second highlight region and inversely proportional to the square of the circumference of the second highlight region.
In the present embodiment, the convexity refers to a ratio of an area of the second highlight region to an area of a convex hull of the second highlight region.
In the present embodiment, the pair of matching regions satisfies any one or any combination of the following conditions:
the absolute value of the difference between the vertical coordinates of the two second highlight areas is smaller than a fourth threshold;
the area ratio of the two second highlight areas is within a fourth predetermined range;
the shape similarity of the two second highlight regions is within a fifth predetermined range.
In the present embodiment, the shape similarity may be expressed as:

sim(A, B) = Σ_i | h_i^A − h_i^B |

wherein A and B are the two second highlight regions, h_i^A is the Hu moment of the second highlight region A, h_i^B is the Hu moment of the second highlight region B, and i indexes the components of the Hu moments.
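The matching-pair search can be sketched as follows. The seven Hu invariants are computed from normalized central moments in the standard way; the thresholds and the use of a sum of absolute Hu-moment differences as the similarity measure are illustrative assumptions.

```python
def hu_moments(pixels):
    """The seven Hu moment invariants of a pixel region, computed from
    normalized central moments eta_pq = mu_pq / mu_00^(1+(p+q)/2)."""
    n = len(pixels)
    xs = [p[1] for p in pixels]
    ys = [p[0] for p in pixels]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    def eta(p, q):
        mu = sum((x - xbar) ** p * (y - ybar) ** q for x, y in zip(xs, ys))
        return mu / n ** (1 + (p + q) / 2)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [
        e20 + e02,
        (e20 - e02) ** 2 + 4 * e11 ** 2,
        (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2,
        (e30 + e12) ** 2 + (e21 + e03) ** 2,
        (e30 - 3 * e12) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        + (3 * e21 - e03) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),
        (e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
        + 4 * e11 * (e30 + e12) * (e21 + e03),
        (3 * e21 - e03) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        - (e30 - 3 * e12) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),
    ]

def find_matching_pairs(regions, dy_max=5.0, area_ratio_rng=(0.5, 2.0), sim_max=0.5):
    """Return index pairs of regions satisfying the three matching conditions."""
    def centroid(r):
        return (sum(p[0] for p in r) / len(r), sum(p[1] for p in r) / len(r))
    pairs = []
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            A, B = regions[i], regions[j]
            # 1) vertical coordinates nearly equal (|dy| below the fourth threshold)
            if abs(centroid(A)[0] - centroid(B)[0]) >= dy_max:
                continue
            # 2) comparable areas
            ratio = len(A) / len(B)
            if not (area_ratio_rng[0] <= ratio <= area_ratio_rng[1]):
                continue
            # 3) similar shapes: sum of absolute Hu-moment differences
            sim = sum(abs(a - b) for a, b in zip(hu_moments(A), hu_moments(B)))
            if sim < sim_max:
                pairs.append((i, j))
    return pairs

# Two identically shaped blobs at the same height pair up, like a lamp pair.
lamp_left = [(y, x) for y in range(6) for x in range(0, 6)]
lamp_right = [(y, x) for y in range(6) for x in range(40, 46)]
print(find_matching_pairs([lamp_left, lamp_right]))  # -> [(0, 1)]
```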
The device of this embodiment determines the target in the detection area (called the second detection area) by detecting highlight regions (called second highlight regions) within it, thereby improving the accuracy of target detection under poor illumination conditions. When the device is applied to vehicle detection, vehicles running in places with poor lighting conditions, such as tunnels, can be detected effectively by exploiting the illumination of their headlamps, improving the accuracy of target detection.
Example 3
The present embodiment provides an image processing apparatus including the object detection device as described in embodiment 2.
Fig. 14 is a schematic diagram of the image processing apparatus of the present embodiment. As shown in fig. 14, the image processing apparatus 1400 may include: a central processing unit (CPU) 1401 and a memory 1402, the memory 1402 being coupled to the central processor 1401. The memory 1402 may store various data, as well as a program for information processing that is executed under the control of the central processor 1401.
In one embodiment, the functionality of the object detection apparatus 1100 may be integrated into the central processor 1401. The central processor 1401 may be configured to implement the target detection method according to embodiment 1.
In another embodiment, the object detection apparatus 1100 may be configured separately from the central processor 1401; for example, the object detection apparatus may be configured as a chip connected to the central processor 1401, with the functions of the object detection apparatus realized under the control of the central processor 1401.
In the present embodiment, the central processor 1401 may be configured to perform control as follows: determining a second detection area; detecting a second highlight region in the second detection region; and filtering the second highlight area to obtain the target in the second detection area.
Further, as shown in fig. 14, the image processing apparatus 1400 may also include: an input/output (I/O) device 1403 and a display 1404; the functions of these components are similar to those in the related art and are not described in detail here. It should be noted that the image processing apparatus 1400 does not necessarily include all the components shown in fig. 14; furthermore, the image processing apparatus 1400 may include components not shown in fig. 14, for which reference may be made to the related art.
An embodiment of the present invention provides a computer-readable program, wherein when the program is executed in an object detection apparatus or an image processing device, the program causes the object detection apparatus or the image processing device to execute an object detection method as described in embodiment 1.
An embodiment of the present invention provides a storage medium storing a computer-readable program, where the computer-readable program causes an object detection apparatus or an image processing device to execute the object detection method according to embodiment 1.
The above devices and methods of the present invention can be implemented by hardware, or by a combination of hardware and software. The present invention relates to a computer-readable program which, when executed by a logic component, enables the logic component to implement the above-described apparatus or its constituent components, or to carry out the various methods or steps described above. The present invention also relates to a storage medium for storing the above program, such as a hard disk, a magnetic disk, an optical disk, a DVD, or a flash memory.
The methods/apparatus described in connection with the embodiments of the invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. For example, one or more of the functional block diagrams and/or one or more combinations of the functional block diagrams (e.g., the determining unit, the first detecting unit, the filtering unit, etc.) shown in fig. 11 may correspond to each software module of the computer program flow or each hardware module. These software modules may correspond to the steps shown in fig. 1, respectively. These hardware modules may be implemented, for example, by solidifying these software modules using a Field Programmable Gate Array (FPGA).
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium; or the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in the memory of the mobile terminal or in a memory card that is insertable into the mobile terminal. For example, if the device (e.g., mobile terminal) employs a relatively large capacity MEGA-SIM card or a large capacity flash memory device, the software module may be stored in the MEGA-SIM card or the large capacity flash memory device.
One or more of the functional blocks and/or one or more combinations of the functional blocks described in the figures can be implemented as a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof designed to perform the functions described herein. One or more of the functional blocks and/or one or more combinations of the functional blocks described in connection with the figures may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.
While the invention has been described with reference to specific embodiments, it will be apparent to those skilled in the art that these descriptions are illustrative and not intended to limit the scope of the invention. Various modifications and alterations of this invention will become apparent to those skilled in the art based upon the spirit and principles of this invention, and such modifications and alterations are also within the scope of this invention.
With respect to the embodiments including the above embodiments, the following remarks are also disclosed:
Supplementary note 1, a target detection method, wherein the method comprises:
determining a second detection area;
detecting a second highlight region in the second detection region; and
filtering the second highlight area to obtain a target in the second detection area.
Supplementary note 2, the object detection method according to supplementary note 1, wherein determining the second detection area comprises:
determining a first detection area;
determining a first reference area according to the first detection area;
detecting a first highlight region in the first reference region; and
updating the first detection area with the first highlight area to obtain the second detection area.
Supplementary note 3, the object detection method according to supplementary note 2, wherein detecting the first highlight region in the first reference region comprises:
taking pixels in the first reference area whose luminance values are greater than a first threshold as the pixels of the first highlight area, to obtain the first highlight area.
Supplementary note 4, the object detection method according to supplementary note 2, wherein updating the first detection area with the first highlight area to obtain the second detection area comprises:
intersecting the upper and lower boundaries of the first highlight area, and their extension lines, with the boundary of the first detection area to obtain the second detection area located within the first detection area.
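With axis-aligned boxes, the boundary intersection above reduces to clipping the vertical extent of the first detection area to the band spanned by the first highlight area. A minimal sketch under that assumption (the (top, left, bottom, right) box representation and the function name are illustrative):

```python
def update_detection_area(first_area, highlight_bbox):
    """Clip the first detection area to the horizontal band spanned by the
    first highlight region (its upper/lower boundaries extended sideways).

    Boxes are (top, left, bottom, right); returns None if the band and the
    detection area do not overlap vertically.
    """
    d_top, d_left, d_bottom, d_right = first_area
    h_top, _, h_bottom, _ = highlight_bbox
    top, bottom = max(d_top, h_top), min(d_bottom, h_bottom)
    if top >= bottom:
        return None
    return (top, d_left, bottom, d_right)

print(update_detection_area((0, 0, 100, 200), (40, 80, 60, 120)))
# -> (40, 0, 60, 200): full detection-area width, height from the highlight band
```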
Supplementary note 5, the object detection method according to supplementary note 1, wherein detecting the second highlight region in the second detection region comprises:
taking pixels in the second detection area whose luminance values are greater than a second threshold as the pixels of the second highlight area, to obtain the second highlight area.
Supplementary note 6, the target detection method according to supplementary note 1, wherein filtering the second highlight region to obtain the target in the second detection region comprises:
performing first filtering on the second highlight regions according to their shapes, and removing any second highlight region whose shape does not meet the predetermined condition;
performing second filtering on the retained second highlight regions according to their center coordinates, and removing, from each pair of second highlight regions whose horizontal coordinates differ by less than a third threshold in absolute value, the region with the smaller vertical coordinate; and
searching the further-retained second highlight regions for matching region pairs, and obtaining the target in the second detection area from the matching region pairs.
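The second filtering step above can be sketched as follows; the center-coordinate representation and the value of the third threshold are assumptions for illustration:

```python
def second_filter(centers, dx_max=3.0):
    """Drop, from every vertically stacked pair of regions, the region whose
    vertical coordinate is smaller.

    `centers` maps region id -> (cx, cy) center coordinates. Two regions count
    as stacked when the absolute difference of their horizontal coordinates is
    below `dx_max` (the "third threshold"; its value here is an assumption).
    """
    removed = set()
    ids = list(centers)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if abs(centers[a][0] - centers[b][0]) < dx_max:
                # Remove the region with the smaller vertical coordinate.
                removed.add(a if centers[a][1] < centers[b][1] else b)
    return [r for r in ids if r not in removed]

# Two regions nearly aligned horizontally: the one with the smaller vertical
# coordinate ("b") is removed, leaving "a".
print(second_filter({"a": (50.0, 80.0), "b": (50.5, 30.0)}))  # -> ['a']
```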
Supplementary note 7, the object detection method according to supplementary note 6, wherein the predetermined condition includes any one or any combination of:
the area is within a first predetermined range;
the roundness is within a second predetermined range;
the convexity is within a third predetermined range.
Supplementary note 8, the object detection method according to supplementary note 7, wherein the roundness is proportional to the area of the second highlight region and inversely proportional to the square of the perimeter of the second highlight region.
Supplementary note 9, the object detection method according to supplementary note 7, wherein the convexity is the ratio of the area of the second highlight region to the area of the convex hull of the second highlight region.
Supplementary note 10, the object detection method according to supplementary note 6, wherein the matching region pair satisfies any one or any combination of the following conditions:
the absolute value of the difference between the vertical coordinates of the two second highlight areas is smaller than a fourth threshold;
the area ratio of the two second highlight areas is within a fourth predetermined range;
the shape similarity of the two second highlight areas is within a fifth predetermined range.
Supplementary note 11, the object detection method according to supplementary note 10, wherein the shape similarity is expressed as:

sim(A, B) = Σ_i | h_i^A − h_i^B |

wherein A and B are the two second highlight regions, h_i^A is the Hu moment of the second highlight region A, h_i^B is the Hu moment of the second highlight region B, and i indexes the components of the Hu moments.

Claims (8)

1. An object detection apparatus, wherein the apparatus comprises:
a determination unit that determines a second detection area;
a first detection unit that detects a second highlight region in the second detection region; and
a filtering unit for filtering the second highlight area to obtain a target in the second detection area,
wherein the determination unit includes:
a first determination unit that determines a first detection area;
a second determination unit that determines a first reference area from the first detection area;
a second detection unit that detects a first highlight region in the first reference region; and
an updating unit that updates the first detection area with the first highlight area to obtain a second detection area;
the first reference area is an area which can be irradiated by a target illuminant in the first detection area, and the updating unit uses an area which is located in the first detection area and is obtained by intersecting the upper and lower boundaries of the first highlight area and the extension line thereof with the boundary of the first detection area as the second detection area.
2. The object detection device according to claim 1, wherein the second detection unit takes a pixel in the first reference region having a luminance value larger than a first threshold value as a pixel within the first highlight region, resulting in the first highlight region.
3. The object detection device according to claim 1, wherein the first detection unit takes a pixel in the second detection area whose luminance value is larger than a second threshold value as a pixel within the second highlight area, resulting in the second highlight area.
4. The object detection device of claim 1, wherein the filtering unit comprises:
a first filtering unit that performs first filtering on the second highlight regions according to their shapes and removes any second highlight region whose shape does not meet a predetermined condition;
a second filtering unit that performs second filtering on the retained second highlight regions according to their center coordinates and removes, from each pair of second highlight regions whose horizontal coordinates differ by less than a third threshold in absolute value, the region with the smaller vertical coordinate; and
a searching unit that searches the further-retained second highlight regions for matching region pairs and obtains the target in the second detection area from the matching region pairs.
5. The object detection device of claim 4, wherein the predetermined condition comprises any one or any combination of:
the area is within a first predetermined range;
the roundness is within a second predetermined range;
the convexity is within a third predetermined range.
6. The object detection device of claim 4, wherein the pair of matching regions satisfies any one or any combination of the following conditions:
the absolute value of the difference between the vertical coordinates of the two second highlight areas is smaller than a fourth threshold;
the area ratio of the two second highlight areas is within a fourth predetermined range;
the shape similarity of the two second highlight regions is within a fifth predetermined range.
7. The object detection device of claim 6, wherein the shape similarity is expressed as:

sim(A, B) = Σ_i | h_i^A − h_i^B |

wherein A and B are the two second highlight regions, h_i^A is the Hu moment of the second highlight region A, h_i^B is the Hu moment of the second highlight region B, and i indexes the components of the Hu moments.
8. An image processing apparatus, wherein the image processing apparatus comprises the object detection device according to any one of claims 1 to 7.
CN201810554618.0A 2017-07-14 2018-06-01 Target detection method and device and image processing equipment Active CN109255349B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017105749890 2017-07-14
CN201710574989 2017-07-14

Publications (2)

Publication Number Publication Date
CN109255349A CN109255349A (en) 2019-01-22
CN109255349B true CN109255349B (en) 2021-11-23

Family

ID=65051963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810554618.0A Active CN109255349B (en) 2017-07-14 2018-06-01 Target detection method and device and image processing equipment

Country Status (2)

Country Link
JP (1) JP7114965B2 (en)
CN (1) CN109255349B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648871B (en) * 2020-12-18 2024-01-02 富士通株式会社 Speed fusion method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002083301A (en) * 2000-09-06 2002-03-22 Mitsubishi Electric Corp Traffic monitoring device
CN102122344A (en) * 2011-01-07 2011-07-13 南京理工大学 Road border detection method based on infrared image
CN102567705A (en) * 2010-12-23 2012-07-11 北京邮电大学 Method for detecting and tracking night running vehicle
CN103226820A (en) * 2013-04-17 2013-07-31 南京理工大学 Improved two-dimensional maximum entropy division night vision image fusion target detection algorithm
CN104732235A (en) * 2015-03-19 2015-06-24 杭州电子科技大学 Vehicle detection method for eliminating night road reflective interference
CN105260701A (en) * 2015-09-14 2016-01-20 中电海康集团有限公司 Front vehicle detection method applied to complex scene
CN105320938A (en) * 2015-09-25 2016-02-10 安徽师范大学 Rear vehicle detection method in nighttime environment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06276524A (en) * 1993-03-19 1994-09-30 Toyota Motor Corp Device for recognizing vehicle running in opposite direction
US7065242B2 (en) * 2000-03-28 2006-06-20 Viewpoint Corporation System and method of three-dimensional image capture and modeling
JP4935586B2 (en) 2007-09-05 2012-05-23 株式会社デンソー Image processing apparatus, in-vehicle image processing apparatus, in-vehicle image display apparatus, and vehicle control apparatus
JP2011103070A (en) 2009-11-11 2011-05-26 Toyota Motor Corp Nighttime vehicle detector
JP2016142647A (en) 2015-02-03 2016-08-08 クラリオン株式会社 Image processing device and vehicle system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis; Sayanan Sivaraman et al.; IEEE Transactions on Intelligent Transportation Systems; 2013-12-31; entire document *
Nighttime video vehicle detection in complex environments (复杂环境下的夜间视频车辆检测); Wu Haitao et al.; Application Research of Computers (《计算机应用研究》); 2007-12-31; entire document *

Also Published As

Publication number Publication date
JP2019021295A (en) 2019-02-07
JP7114965B2 (en) 2022-08-09
CN109255349A (en) 2019-01-22

Similar Documents

Publication Publication Date Title
Wu et al. Lane-mark extraction for automobiles under complex conditions
JP6348758B2 (en) Image processing device
US20150278615A1 (en) Vehicle exterior environment recognition device
CN111612781A (en) Screen defect detection method and device and head-mounted display equipment
US10339396B2 (en) Vehicle accessibility determination device
KR101432440B1 (en) Fire smoke detection method and apparatus
CN109816621B (en) Abnormal light spot detection device and method and electronic equipment
TWI609807B (en) Image evaluation method and electronic apparatus thereof
Choi et al. Crosswalk and traffic light detection via integral framework
WO2023029467A1 (en) Method and apparatus for vehicle light detection, electronic device, and storage medium
Salarian et al. A vision based system for traffic lights recognition
CN109102026B (en) Vehicle image detection method, device and system
US20150379334A1 (en) Object recognition apparatus
CN109996377B (en) Street lamp control method and device and electronic equipment
CN108052921B (en) Lane line detection method, device and terminal
JP2019020956A (en) Vehicle surroundings recognition device
CN107992810B (en) Vehicle identification method and device, electronic equipment and storage medium
CN109255349B (en) Target detection method and device and image processing equipment
JP2017004295A (en) Traffic light recognition apparatus and traffic light recognition method
US20140226908A1 (en) Object detection apparatus, object detection method, storage medium, and integrated circuit
KR20120098292A (en) Method for detecting traffic lane
KR101402089B1 (en) Apparatus and Method for Obstacle Detection
JP7200893B2 (en) Attached matter detection device and attached matter detection method
JP2016110373A (en) Curve mirror detection device
Li et al. Rear lamp based vehicle detection and tracking for complex traffic conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant