CN116071594A - Target detection method, device, computer equipment and storage medium - Google Patents

Target detection method, device, computer equipment and storage medium Download PDF

Info

Publication number
CN116071594A
CN116071594A (application CN202310139781.1A)
Authority
CN
China
Prior art keywords
area
target
target area
same
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310139781.1A
Other languages
Chinese (zh)
Inventor
张振林
孙超
鞠园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Innovation Corp
Original Assignee
China Automotive Innovation Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Innovation Corp filed Critical China Automotive Innovation Corp
Priority to CN202310139781.1A priority Critical patent/CN116071594A/en
Publication of CN116071594A publication Critical patent/CN116071594A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a target detection method, a target detection apparatus, a computer device and a storage medium. The method comprises the following steps: acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects; determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area; determining the distance between the same second target area and each first target area; and determining the whole area of the same second target area according to those distances. Starting from the first target areas that have already been matched to whole areas, the whole area of the same second target area can be determined by calculating the distance between the same second target area and each first target area; the blocked whole area can thus be recovered, which alleviates the problem that target objects are easily missed when they are occluded.

Description

Target detection method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of visual detection technology, and in particular to a target detection method and apparatus, a computer device, and a storage medium.
Background
Target detection is an important component of the visual perception module in automatic driving, and accurately detecting targets plays a non-negligible role in prediction and path planning.
Currently, when the visual perception module performs target detection, a single detector from the YOLO (You Only Look Once) family of target detection algorithms is mainly used. When the traffic environment is not complex, for example when the flow of targets such as vehicles and pedestrians is small, such a detector can often achieve a good result; however, at peaks of target flow, for example during commuting periods, targets are easily missed because they block one another or are blocked by vehicles.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a target detection method, apparatus, computer device, and storage medium capable of reducing the miss rate of target objects.
In a first aspect, the present application provides a method of target detection. The method comprises the following steps:
acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects;
determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area;
determining the distance between the same second target area and each first target area;
and determining the whole area of the same second target area according to the distances between the same second target area and each first target area.
In one embodiment, determining the entire area of the same second target area according to the distance between the same second target area and each first target area includes:
determining a third target area from the first target areas according to the distance between the same second target area and the first target areas;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area.
In one embodiment, determining the third target area from the first target areas according to the distance between the same second target area and the first target areas comprises:
determining a minimum distance from the distances;
and if the minimum distance is smaller than the preset distance threshold value, taking the first target area corresponding to the minimum distance as a third target area.
In one embodiment, determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area includes:
According to a preset adjustment rule, adjusting the coordinates of the same second target area to obtain an adjusted second target area;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area.
In one embodiment, determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area includes:
determining a reference coordinate according to each vertex coordinate of the adjusted second target area;
and determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area.
In one embodiment, determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area includes:
determining a new abscissa according to the abscissa of the reference coordinate and the width of the whole area corresponding to the third target area;
determining a new ordinate according to the ordinate of the reference coordinate and the height of the integral region corresponding to the third target region;
and determining the whole area of the same second target area according to the new abscissa, the new ordinate and the reference coordinate.
In one embodiment, determining, from each local area, a first target area that is matched to a whole area and a second target area that is not matched to any whole area includes:
determining the intersection ratio of each local area and the whole area of the at least one target object;
taking a local area whose intersection ratio is greater than or equal to a preset intersection ratio threshold as a first target area that is matched to a whole area;
and taking a local area whose intersection ratio is smaller than the preset intersection ratio threshold as a second target area that is not matched to any whole area.
In a second aspect, the present application also provides an object detection apparatus. The device comprises:
the acquisition module is used for acquiring a local area of each target object in the image to be detected and a whole area of at least one of the target objects;
the matching module is used for determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area;
the analysis module is used for determining the distance between the same second target area and each first target area;
and the determining module is used for determining the whole area of the same second target area according to the distance between the same second target area and each first target area.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects;
determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area;
determining the distance between the same second target area and each first target area;
and determining the whole area of the same second target area according to the distances between the same second target area and each first target area.
In a fourth aspect, the present application also provides a computer-readable storage medium. A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects;
determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area;
determining the distance between the same second target area and each first target area;
and determining the whole area of the same second target area according to the distances between the same second target area and each first target area.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects;
determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area;
determining the distance between the same second target area and each first target area;
and determining the whole area of the same second target area according to the distances between the same second target area and each first target area.
With the target detection method, apparatus, computer device and storage medium described above, the local area of each target object in the image to be detected and the whole area of at least one of the target objects are acquired, and a first target area that is matched to a whole area and a second target area that is not matched to any whole area are then determined from the local areas; the distance between the same second target area and each first target area is calculated, and the whole area of the same second target area is then determined according to those distances. A second target area is a local area that is not matched to any whole area, that is, the whole area of its corresponding target object is blocked so that the local area cannot be matched to it; by inferring that whole area from the already matched first target areas, the blocked whole area can still be obtained and missed detections of occluded targets are reduced.
Drawings
FIG. 1 is a diagram of an application environment for a target detection method in one embodiment;
FIG. 2 is a flow chart of a method of detecting targets in one embodiment;
FIG. 3 is a flow chart of a third target area determination method according to an embodiment;
FIG. 4 is a flow diagram of a method for determining an overall region of a second target region in one embodiment;
FIG. 5 is a flow chart of a method for determining an overall region of a second target region according to another embodiment;
FIG. 6 is a flow chart of a method for determining an overall region of a second target region in yet another embodiment;
FIG. 7 is a flow chart of a method of determining a target area in one embodiment;
FIG. 8 is a flow chart of a method for detecting targets according to another embodiment;
FIG. 9 is a block diagram of an object detection device in one embodiment;
FIG. 10 is a block diagram of the determination module in one embodiment;
FIG. 11 is a block diagram of the distance analysis sub-module in one embodiment;
FIG. 12 is a block diagram of the overall region determination submodule in one embodiment;
FIG. 13 is a block diagram of the matching module in one embodiment;
fig. 14 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The target detection method provided by the embodiments of the application can be applied to the application environment shown in fig. 1, in which the camera 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. Specifically, the camera 102 collects an image to be detected and transmits it to the server 104 through the communication network. The server 104 then obtains a local area of each target object in the image to be detected and a whole area of at least one of the target objects; determines, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area; determines the distance between the same second target area and each first target area; and determines the whole area of the same second target area according to those distances. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a target detection method is provided. Taking the application of the method to the server in fig. 1 as an example, the method includes the following steps:
S201, acquiring a local area of each target object in the image to be detected and a whole area of at least one of the target objects.
The image to be detected is an image acquired by the server on which target object detection has not yet been performed. The local area refers to a partial region of a target object, such as the upper-body area or face area of a human body; the whole area refers to the area covering the entire target object (possibly extended slightly outwards), which may be represented by a rectangular box, for example the whole-body area of a human body.
In this embodiment, an optional implementation is: the image to be detected is input into a trained neural network model, the neural network model analyzes and processes it, and the local area and whole area of each target object in the image to be detected are output.
In another embodiment, the image to be detected is identified through an image identification algorithm, and a local area and a whole area of the target object in the image to be detected are finally obtained.
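For concreteness, the sketches in this description use Python; they are illustrative assumptions rather than the patent's implementation. Boxes are assumed to be (x1, y1, x2, y2) pixel rectangles, and run_detector is a hypothetical placeholder for whatever trained model or recognition algorithm produces the local-area and whole-area boxes.

from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixel coordinates

def run_detector(image) -> Tuple[List[Box], List[Box]]:
    """Hypothetical placeholder: a trained model or image-recognition algorithm that
    returns the local areas (e.g. face boxes) and the whole areas (e.g. whole-body
    boxes) found in the image to be detected."""
    raise NotImplementedError("replace with the actual detector used in S201")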
S202, determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area.
The first target area is a local area that can be matched to a whole area and may be represented by a rectangular box, for example a face area that is successfully matched to a whole-body area; the second target area is a local area that cannot be matched to any whole area and may likewise be represented by a rectangular box, for example a face area that cannot be successfully matched to a whole-body area.
An optional implementation is: the obtained local areas and whole areas are input into a trained neural network model, which analyzes and processes them and outputs the first target areas that can be matched to a whole area and the second target areas that cannot.
In another implementation, the image to be detected is processed by an image recognition algorithm to match the local areas against the whole areas: a local area that is successfully matched to a whole area is a first target area, and a local area that cannot be successfully matched is a second target area.
S203, determining the distance between the same second target area and each first target area.
An optional implementation is: the Euclidean distance between the same second target area and each first target area is calculated and taken as the distance between them.
Another implementation is: the distance between the center coordinates of the same second target area and the center coordinates of each first target area is calculated and taken as the distance between the same second target area and that first target area.
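As an illustration only, a minimal sketch of the second implementation above (center-to-center distance), assuming the (x1, y1, x2, y2) box format introduced earlier:

import math

def box_center(box):
    """Return the center (cx, cy) of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def center_distance(box_a, box_b):
    """Euclidean distance between the centers of two boxes."""
    ax, ay = box_center(box_a)
    bx, by = box_center(box_b)
    return math.hypot(ax - bx, ay - by)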
S204, determining the whole area of the same second target area according to the distance between the same second target area and each first target area.
According to the distances between the same second target area and each first target area, the first target area closest to the same second target area can be determined. If that distance is smaller than a set threshold, the whole area corresponding to that first target area is relatively close to the whole area of the same second target area, and the whole area of the same second target area can then be determined from the same second target area together with the whole area corresponding to that first target area.
With the method provided by this embodiment of the application, the local area of each target object in the image to be detected and the whole area of at least one of the target objects are acquired; a first target area that is matched to a whole area and a second target area that is not matched to any whole area are determined from the local areas; the distance between the same second target area and each first target area is calculated; and the whole area of the same second target area is then determined according to those distances. Because a second target area is a local area that is not matched to any whole area, that is, the whole area of its corresponding target object is blocked, the blocked whole area can still be recovered from the nearby matched first target areas, which reduces missed detections of occluded targets.
In one embodiment, as shown in fig. 3, in S204, determining the whole area of the same second target area according to the distance between the same second target area and each first target area, further includes:
s301, determining a third target area from the first target areas according to the distance between the same second target area and the first target areas.
The third target area is an area selected from the first target areas. The third target area may be a first target area corresponding to any one of a plurality of distances smaller than a preset distance threshold, or may be a first target area corresponding to a minimum distance of the plurality of distances smaller than the preset distance threshold.
In this embodiment, an optional implementation manner is: and calculating the distance between the same second target area and each first target area, determining the minimum distance from the distances, and taking the first target area corresponding to the minimum distance as a third target area if the minimum distance is smaller than a preset distance threshold value.
If the minimum distance is greater than or equal to the preset distance threshold, discarding the second target area.
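A hedged sketch of this selection step, assuming the center_distance helper above and a caller-supplied distance threshold (the patent does not fix a value):

def select_third_target_area(second_area, first_areas, distance_threshold):
    """Return the first target area nearest to the second target area, or None if
    even the nearest one is at or beyond the preset distance threshold (in which
    case the second target area is discarded)."""
    if not first_areas:
        return None
    distances = [center_distance(second_area, area) for area in first_areas]
    min_index = min(range(len(distances)), key=distances.__getitem__)
    if distances[min_index] < distance_threshold:
        return first_areas[min_index]
    return None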
S302, determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area.
Optionally, the position of the whole area of the same second target area may be determined from the same second target area itself, its approximate size may be determined from the whole area corresponding to the third target area, and the two may be combined to determine the whole area of the same second target area.
In this application, the distance between the same second target area and each first target area is calculated, the first target area closest to the same second target area is determined, and the third target area is determined accordingly. Because the whole area corresponding to the third target area is the closest to the whole area of the same second target area in the image and the two whole areas can be assumed to be of similar size, the whole area of the same second target area can be determined from the whole area corresponding to the third target area together with the same second target area.
In one embodiment, as shown in fig. 4, determining, according to the integral area corresponding to the third target area and the same second target area, the integral area of the same second target area includes:
s401, adjusting the coordinates of the same second target area according to a preset adjustment rule to obtain an adjusted second target area.
The preset adjustment rule is a rule set in advance for adjusting the coordinates of the same second target area, for example expanding or shrinking it.
An alternative embodiment: by moving at least one vertex coordinate of the same second target area to the outside, expansion adjustment of the same second target area can be achieved, for example, moving the vertex coordinate of the upper right corner to the upper right, and expansion of the same second target area to the upper right can be achieved.
Another embodiment: by moving at least one vertex coordinate of the same second target area inward, the same second target area can be reduced and adjusted, for example, by moving the vertex coordinate of the upper right corner to the lower left, the same second target area can be reduced and adjusted to the lower left.
Yet another embodiment:
taking the same second target area as a rectangular area as an example, the abscissa of its top-left vertex is shifted to the left by a first distance and the ordinate of that vertex is shifted upwards by a second distance, while the abscissa of its bottom-right vertex is shifted to the right by the first distance and the ordinate of that vertex is shifted downwards by the second distance; the two shifted vertices are then the top-left and bottom-right vertices of the adjusted second target area;
preferably, the first distance is one eighth of the width of the same second target area, and the second distance is one eighth of the height of the same second target area.
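A sketch of this last adjustment rule, assuming rectangular (x1, y1, x2, y2) boxes with the image origin at the top-left and the preferred one-eighth offsets; the exact shift directions are an assumption consistent with the expansion example above:

def expand_second_target_area(box, width_ratio=1.0 / 8, height_ratio=1.0 / 8):
    """Shift the top-left vertex left/up and the bottom-right vertex right/down,
    producing the adjusted (expanded) second target area."""
    x1, y1, x2, y2 = box
    dx = (x2 - x1) * width_ratio   # first distance: one eighth of the box width
    dy = (y2 - y1) * height_ratio  # second distance: one eighth of the box height
    return (x1 - dx, y1 - dy, x2 + dx, y2 + dy)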
S402, determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area.
Optionally, the adjusted second target area is combined with the size information of the whole area corresponding to the third target area to determine the whole area of the same second target area.
Because the second target area is adjusted according to the preset adjustment rule and the adjusted second target area is then combined with the whole area corresponding to the third target area, the resulting whole area of the same second target area is more accurate.
On the basis of the above embodiment, as shown in fig. 5, determining, according to the integral area corresponding to the third target area and the adjusted second target area, the integral area of the same second target area includes:
s501, determining reference coordinates according to the adjusted vertex coordinates of the second target area.
The vertex coordinates are coordinates of four corner points of the second target area, and the reference coordinates can be coordinates of one vertex or center coordinates of an area surrounded by the four vertices.
S502, determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area.
The position and width of the whole area of the same second target area can be determined by combining the abscissa of the reference coordinates with the width of the whole area corresponding to the third target area, and its position and height can be determined by combining the ordinate of the reference coordinates with the height of that whole area, thereby determining the whole area of the same second target area.
On the basis of the above embodiment, as shown in fig. 6, determining the entire area of the same second target area according to the reference coordinates and the size information of the entire area corresponding to the third target area includes:
s601, determining a new abscissa according to the abscissa of the reference coordinate and the width of the whole area corresponding to the third target area;
optionally, the top-left vertex of the adjusted second target area is selected as the reference coordinate, and the width of the whole area corresponding to the third target area is added to the abscissa of the reference coordinate to obtain a new abscissa, which is the abscissa of the top-right vertex of the whole area of the same second target area.
S602, determining a new ordinate according to the ordinate of the reference coordinate and the height of the integral region corresponding to the third target region;
optionally, with the top-left vertex of the adjusted second target area as the reference coordinate, the height of the whole area corresponding to the third target area is added to the ordinate of the reference coordinate to obtain a new ordinate, which is the ordinate of the bottom-left vertex of the whole area of the same second target area.
S603, determining the whole area of the same second target area according to the new abscissa, the new ordinate and the reference coordinate.
Optionally, the position of the whole area of the same second target area is determined from the reference coordinates, and its width and height are then determined by combining the new abscissa and the new ordinate with the reference coordinates, thereby determining the whole area of the same second target area.
According to the method and the device, the whole area of the same second target area can be accurately determined by combining the reference coordinates with the width and the height of the whole area corresponding to the third target area.
On the basis of the above embodiment, if the determined coordinates of the whole area of the same second target area extend beyond the range of the image to be detected, the coordinates that exceed the image boundary need to be truncated to it.
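A sketch of S601 to S603 together with the truncation step, assuming the reference coordinate is the top-left vertex of the adjusted second target area and that boxes use the (x1, y1, x2, y2) format above:

def infer_whole_area(adjusted_second_area, third_whole_area, image_width, image_height):
    """Build the whole area of the occluded target from the reference vertex and the
    size of the whole area corresponding to the third target area, then truncate any
    coordinate that falls outside the image to be detected."""
    ref_x, ref_y = adjusted_second_area[0], adjusted_second_area[1]  # reference coordinate
    width = third_whole_area[2] - third_whole_area[0]
    height = third_whole_area[3] - third_whole_area[1]
    new_x = ref_x + width    # new abscissa (S601)
    new_y = ref_y + height   # new ordinate (S602)
    x1 = min(max(ref_x, 0.0), image_width)
    y1 = min(max(ref_y, 0.0), image_height)
    x2 = min(max(new_x, 0.0), image_width)
    y2 = min(max(new_y, 0.0), image_height)
    return (x1, y1, x2, y2)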
In one embodiment, as shown in fig. 7, determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area includes:
S701, determining the intersection ratio of each local area and the whole area of the at least one target object.
The intersection ratio is the ratio of the intersection to the union of a local area and a whole area.
Optionally, an image recognition algorithm is used to obtain the intersection area and the union area of each local area and each whole area, and the intersection ratio of each local area and the whole area of the at least one target object is obtained by computing the ratio of the intersection area to the union area.
S702, taking a local area whose intersection ratio is greater than or equal to a preset intersection ratio threshold as a first target area that is matched to a whole area.
The preset intersection ratio threshold is a preset threshold on the ratio of intersection to union.
Optionally, the calculated intersection ratio is compared with the preset intersection ratio threshold, and if it is greater than or equal to the threshold, the local area is taken as a first target area.
S703, taking a local area whose intersection ratio is smaller than the preset intersection ratio threshold as a second target area that is not matched to any whole area.
Optionally, the calculated intersection ratio is compared with the preset intersection ratio threshold, and if it is smaller than the threshold, the local area is taken as a second target area.
On the basis of the above embodiment, the local area and the whole area can both be identified directly, which solves the problem of missed detection when the whole of a target object is blocked and only a local area remains visible.
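A minimal sketch of S701 to S703, assuming (x1, y1, x2, y2) boxes; the 0.5 default threshold is an assumption, not a value given in the patent:

def iou(box_a, box_b):
    """Intersection ratio (intersection over union) of two boxes; 0.0 if disjoint."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def split_local_areas(local_areas, whole_areas, iou_threshold=0.5):
    """Split the local areas into first target areas (matched to a whole area) and
    second target areas (not matched to any whole area)."""
    first, second = [], []
    for local in local_areas:
        best = max((iou(local, whole) for whole in whole_areas), default=0.0)
        (first if best >= iou_threshold else second).append(local)
    return first, second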
Optionally, on the basis of the foregoing embodiments, as shown in fig. 8, this embodiment provides an optional implementation of a target detection method, including the following steps:
S801, acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects;
S802, determining the intersection ratio of each local area and the whole area of the at least one target object;
taking a local area whose intersection ratio is greater than or equal to a preset intersection ratio threshold as a first target area that is matched to a whole area;
and taking a local area whose intersection ratio is smaller than the preset intersection ratio threshold as a second target area that is not matched to any whole area.
S803, determining the distance between the same second target area and each first target area;
determining a minimum distance from the distances;
If the minimum distance is smaller than the preset distance threshold, the first target area corresponding to the minimum distance is used as a third target area;
s804, adjusting the coordinates of the same second target area according to a preset adjustment rule to obtain an adjusted second target area;
s805, determining a reference coordinate according to each vertex coordinate of the adjusted second target area;
s806, determining a new abscissa according to the abscissa of the reference coordinate and the width of the whole area corresponding to the third target area;
s807, determining a new ordinate according to the ordinate of the reference coordinate and the height of the integral region corresponding to the third target region;
s808, determining the whole area of the same second target area according to the new abscissa, the new ordinate and the reference coordinate.
In this implementation, starting from the first target areas that have been successfully matched to whole areas, the third target area is determined as the first target area whose distance to the same second target area is smallest and below the set distance threshold; the whole area of the same second target area is then determined from the second target area and the whole area corresponding to the third target area. This solves the problem that a target object whose whole area is blocked is easily missed.
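Putting the pieces together, a hedged end-to-end sketch of S801 to S808 using the helpers sketched earlier (split_local_areas, center_distance, expand_second_target_area, infer_whole_area, iou); the thresholds and the way the third target area's whole area is looked up are assumptions made to keep the sketch self-contained:

def recover_occluded_whole_areas(local_areas, whole_areas, image_size,
                                 iou_threshold=0.5, distance_threshold=100.0):
    """Return an inferred whole area for every second target area that has a
    sufficiently close first target area; others are discarded."""
    image_w, image_h = image_size
    first, second = split_local_areas(local_areas, whole_areas, iou_threshold)
    recovered = []
    for second_area in second:
        if not first:
            break
        distances = [center_distance(second_area, f) for f in first]
        idx = min(range(len(distances)), key=distances.__getitem__)
        if distances[idx] >= distance_threshold:
            continue  # no first target area is close enough: discard (S803)
        third_area = first[idx]
        # Assumption: the whole area paired with the third target area is the one
        # with the highest intersection ratio against it.
        third_whole = max(whole_areas, key=lambda w: iou(third_area, w))
        adjusted = expand_second_target_area(second_area)          # S804
        recovered.append(infer_whole_area(adjusted, third_whole,   # S805-S808
                                          image_w, image_h))
    return recovered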
Based on the same inventive concept, an embodiment of the application also provides an object detection apparatus for implementing the above object detection method. The implementation of the solution provided by the apparatus is similar to that described for the method above, so for the specific limitations of the object detection apparatus embodiments below, reference may be made to the limitations of the object detection method above, which are not repeated here.
In one embodiment, as shown in fig. 9, there is provided an object detection apparatus 1 including:
the acquisition module 10 is configured to acquire a local area of each target object in the image to be detected and a whole area of at least one of the target objects;
the matching module 20 is configured to determine, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area;
an analysis module 30, configured to determine a distance between the same second target area and each first target area;
the determining module 40 is configured to determine an overall area of the same second target area according to the distances between the same second target area and each first target area.
In one embodiment, to improve the accuracy of the overall region determination of the same second target region, as shown in fig. 10, the determining module 40 further includes, based on the above fig. 9:
A distance analysis sub-module 401, configured to determine a third target area from the first target areas according to the distance between the same second target area and each first target area;
the whole area determining sub-module 402 is configured to determine a whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area.
In one embodiment, to accurately determine the third target area, as shown in fig. 11, on the basis of fig. 10 above, the distance analysis sub-module 401 further includes:
a distance sorting unit 4011 for determining a minimum distance from the distances;
the distance determining unit 4012 is configured to, if the minimum distance is smaller than the preset distance threshold, take the first target area corresponding to the minimum distance as the third target area.
In one embodiment, to further improve the accuracy of determining the entire area of the same second target area, as shown in fig. 12, the entire area determining sub-module 402 further includes, on the basis of fig. 10 above:
the area adjustment unit 4021 is configured to adjust coordinates of the same second target area according to a preset adjustment rule to obtain an adjusted second target area;
The determining unit 4022 is configured to determine an overall area of the same second target area according to the overall area corresponding to the third target area and the adjusted second target area.
In one embodiment, in order to further improve the accuracy of determining the entire area of the same second target area, the determining unit 4022 is specifically configured to:
determining a reference coordinate according to each vertex coordinate of the adjusted second target area;
determining a new abscissa according to the abscissa of the reference coordinate and the width of the whole area corresponding to the third target area;
determining a new ordinate according to the ordinate of the reference coordinate and the height of the integral region corresponding to the third target region;
and determining the whole area of the same second target area according to the new abscissa, the new ordinate and the reference coordinate.
In one embodiment, as shown in fig. 13, on the basis of fig. 9 above, the matching module 20 further includes:
an intersection ratio analysis unit 201, configured to determine the intersection ratio of each local area and the whole area of the at least one target object;
a first matching unit 202, configured to take a local area whose intersection ratio is greater than or equal to a preset intersection ratio threshold as a first target area that is matched to a whole area;
and a second matching unit 203, configured to take a local area whose intersection ratio is smaller than the preset intersection ratio threshold as a second target area that is not matched to any whole area.
Each of the modules in the above object detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 14. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing spectral signature data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of object detection.
It will be appreciated by those skilled in the art that the structure shown in fig. 14 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects;
determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area;
determining the distance between the same second target area and each first target area;
and determining the whole area of the same second target area according to the distances between the same second target area and each first target area.
In one embodiment, the processor when executing the computer program further performs the steps of: determining the whole area of the same second target area according to the distance between the same second target area and each first target area, wherein the method comprises the following steps:
Determining a third target area from the first target areas according to the distance between the same second target area and the first target areas;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area.
In one embodiment, the processor when executing the computer program further performs the steps of: determining a third target area from the first target areas according to the distance between the same second target area and the first target areas, comprising:
determining a minimum distance from the distances;
and if the minimum distance is smaller than the preset distance threshold value, taking the first target area corresponding to the minimum distance as a third target area.
In one embodiment, the processor when executing the computer program further performs the steps of: determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area, wherein the method comprises the following steps:
according to a preset adjustment rule, adjusting the coordinates of the same second target area to obtain an adjusted second target area;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area.
In one embodiment, the processor when executing the computer program further performs the steps of: determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area, wherein the method comprises the following steps:
determining a reference coordinate according to each vertex coordinate of the adjusted second target area;
and determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area.
In one embodiment, the processor when executing the computer program further performs the steps of: determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area, wherein the method comprises the following steps:
determining a new abscissa according to the abscissa of the reference coordinate and the width of the whole area corresponding to the third target area;
determining a new ordinate according to the ordinate of the reference coordinate and the height of the integral region corresponding to the third target region;
and determining the whole area of the same second target area according to the new abscissa, the new ordinate and the reference coordinate.
In one embodiment, the processor, when executing the computer program, further performs the steps of: determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area, including:
determining the intersection ratio of each local area and the whole area of the at least one target object;
taking a local area whose intersection ratio is greater than or equal to a preset intersection ratio threshold as a first target area that is matched to a whole area;
and taking a local area whose intersection ratio is smaller than the preset intersection ratio threshold as a second target area that is not matched to any whole area.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects;
determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area;
determining the distance between the same second target area and each first target area;
and determining the whole area of the same second target area according to the distances between the same second target area and each first target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the whole area of the same second target area according to the distance between the same second target area and each first target area, wherein the method comprises the following steps:
Determining a third target area from the first target areas according to the distance between the same second target area and the first target areas;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining a third target area from the first target areas according to the distance between the same second target area and the first target areas, comprising:
determining a minimum distance from the distances;
and if the minimum distance is smaller than the preset distance threshold value, taking the first target area corresponding to the minimum distance as a third target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area, wherein the method comprises the following steps:
according to a preset adjustment rule, adjusting the coordinates of the same second target area to obtain an adjusted second target area;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area, wherein the method comprises the following steps:
determining a reference coordinate according to each vertex coordinate of the adjusted second target area;
and determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area, wherein the method comprises the following steps:
determining a new abscissa according to the abscissa of the reference coordinate and the width of the whole area corresponding to the third target area;
determining a new ordinate according to the ordinate of the reference coordinate and the height of the integral region corresponding to the third target region;
and determining the whole area of the same second target area according to the new abscissa, the new ordinate and the reference coordinate.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area, including:
determining the intersection ratio of each local area and the whole area of the at least one target object;
taking a local area whose intersection ratio is greater than or equal to a preset intersection ratio threshold as a first target area that is matched to a whole area;
and taking a local area whose intersection ratio is smaller than the preset intersection ratio threshold as a second target area that is not matched to any whole area.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects;
determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area;
determining the distance between the same second target area and each first target area;
and determining the whole area of the same second target area according to the distances between the same second target area and each first target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the whole area of the same second target area according to the distance between the same second target area and each first target area, wherein the method comprises the following steps:
Determining a third target area from the first target areas according to the distance between the same second target area and the first target areas;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining a third target area from the first target areas according to the distance between the same second target area and the first target areas, comprising:
determining a minimum distance from the distances;
and if the minimum distance is smaller than the preset distance threshold value, taking the first target area corresponding to the minimum distance as a third target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area, wherein the method comprises the following steps:
according to a preset adjustment rule, adjusting the coordinates of the same second target area to obtain an adjusted second target area;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area, wherein the method comprises the following steps:
determining a reference coordinate according to each vertex coordinate of the adjusted second target area;
and determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area, wherein the method comprises the following steps:
determining a new abscissa according to the abscissa of the reference coordinate and the width of the whole area corresponding to the third target area;
determining a new ordinate according to the ordinate of the reference coordinate and the height of the integral region corresponding to the third target region;
and determining the whole area of the same second target area according to the new abscissa, the new ordinate and the reference coordinate.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: determining, from the local areas, a first target area that is matched to a whole area and a second target area that is not matched to any whole area, including:
determining the intersection ratio of each local area and the whole area of the at least one target object;
taking a local area whose intersection ratio is greater than or equal to a preset intersection ratio threshold as a first target area that is matched to a whole area;
and taking a local area whose intersection ratio is smaller than the preset intersection ratio threshold as a second target area that is not matched to any whole area.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be combined in any manner. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be regarded as falling within the scope of this description.
The above examples represent only a few embodiments of the present application; although they are described in detail, they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art could make various modifications and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method of target detection, the method comprising:
acquiring a local area of each target object in an image to be detected and a whole area of at least one of the target objects;
determining a first target area matched with the whole area and a second target area not matched with the whole area from the local areas;
determining the distance between the same second target area and each first target area;
and determining the whole area of the same second target area according to the distance between the same second target area and each first target area.
2. The method according to claim 1, wherein determining the whole area of the same second target area according to the distance between the same second target area and each of the first target areas comprises:
determining a third target area from the first target areas according to the distance between the same second target area and the first target areas;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area.
3. The method according to claim 2, wherein determining a third target area from each of the first target areas according to the distance between the same second target area and each of the first target areas comprises:
determining a minimum distance from each of the distances;
and if the minimum distance is smaller than a preset distance threshold, taking the first target area corresponding to the minimum distance as the third target area.
4. The method according to claim 2, wherein determining the whole area of the same second target area according to the whole area corresponding to the third target area and the same second target area comprises:
according to a preset adjustment rule, adjusting the coordinates of the same second target area to obtain an adjusted second target area;
and determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area.
5. The method according to claim 4, wherein determining the whole area of the same second target area according to the whole area corresponding to the third target area and the adjusted second target area comprises:
determining a reference coordinate according to each vertex coordinate of the adjusted second target area;
and determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area.
6. The method according to claim 5, wherein determining the whole area of the same second target area according to the reference coordinates and the size information of the whole area corresponding to the third target area comprises:
determining a new abscissa according to the abscissa of the reference coordinate and the width of the whole area corresponding to the third target area;
determining a new ordinate according to the ordinate of the reference coordinate and the height of the whole area corresponding to the third target area;
and determining the whole area of the same second target area according to the new abscissa, the new ordinate and the reference coordinate.
7. The method according to any one of claims 1 to 6, wherein determining, from each of the local areas, a first target area matched with the whole area and a second target area not matched with the whole area comprises:
determining the intersection ratio of each local area and the whole area of the at least one target object;
taking the local area corresponding to an intersection ratio greater than or equal to a preset intersection ratio threshold as a first target area matched with the whole area;
and taking the local area corresponding to an intersection ratio smaller than the preset intersection ratio threshold as a second target area not matched with the whole area.
8. An object detection apparatus, comprising:
the acquisition module is used for acquiring a local area of each target object in the image to be detected and a whole area of at least one of the target objects;
the matching module is used for determining, from each of the local areas, a first target area matched with the whole area and a second target area not matched with the whole area;
the analysis module is used for determining the distance between the same second target area and each first target area;
and the determining module is used for determining the whole area of the same second target area according to the distance between the same second target area and each first target area.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the object detection method of any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor realizes the steps of the object detection method according to any of claims 1 to 7.
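Purely as an illustrative, non-limiting sketch of the pipeline recited in claims 1 to 3, the following Python fragment pairs an unmatched second target area with a third target area by minimum distance. The use of Euclidean distance between box centres and the placeholder threshold of 50.0 are assumptions introduced for this example; the claims leave the concrete distance measure and threshold value open.

```python
import math


def centre(box):
    """Centre point of an axis-aligned box (x1, y1, x2, y2)."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)


def find_third_target_area(second_area, first_areas, distance_threshold=50.0):
    """Return the first target area closest to the second target area,
    or None when the minimum distance is not below the threshold.

    Assumption: distance is the Euclidean distance between box centres;
    the disclosure does not fix the concrete distance measure.
    """
    if not first_areas:
        return None
    sx, sy = centre(second_area)
    distances = [math.hypot(sx - cx, sy - cy)
                 for cx, cy in (centre(f) for f in first_areas)]
    min_index = min(range(len(distances)), key=distances.__getitem__)
    if distances[min_index] < distance_threshold:
        return first_areas[min_index]   # the third target area
    return None
```

The whole area of that second target area could then be derived from the returned third target area in the manner illustrated by the earlier sketch.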
CN202310139781.1A 2023-02-20 2023-02-20 Target detection method, device, computer equipment and storage medium Pending CN116071594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310139781.1A CN116071594A (en) 2023-02-20 2023-02-20 Target detection method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116071594A 2023-05-05

Family

ID=86181901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310139781.1A Pending CN116071594A (en) 2023-02-20 2023-02-20 Target detection method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116071594A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination