CN117496203A - Object matching method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN117496203A
CN117496203A
Authority
CN
China
Prior art keywords
target
target object
matching
cloud point
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311397130.9A
Other languages
Chinese (zh)
Inventor
林亦宁
吴振宙
王嘉男
黄盛明
王贝贝
沈康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Shanma Zhiqing Technology Co Ltd
Shanghai Supremind Intelligent Technology Co Ltd
Original Assignee
Hangzhou Shanma Zhiqing Technology Co Ltd
Shanghai Supremind Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Shanma Zhiqing Technology Co Ltd and Shanghai Supremind Intelligent Technology Co Ltd
Priority to CN202311397130.9A
Publication of CN117496203A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides an object matching method and apparatus, a storage medium, and an electronic device. The method includes: acquiring N images captured at a target moment by N acquisition devices, where the N acquisition devices are arranged in N directions of a target area, the device in each direction captures the corresponding one of the N images, and N is greater than 1; identifying the target objects in the N images to obtain N target object sets, where each set contains the target objects identified in its corresponding image and the sets correspond one-to-one with the images; and matching the target objects across the N images according to the coordinate position of each target object in a target coordinate system, to obtain a matching result. The invention solves the problem of low target-object matching accuracy in the related art.

Description

Object matching method and device, storage medium and electronic device
Technical Field
Embodiments of the invention relate to the field of map positioning, and in particular to an object matching method and apparatus, a storage medium, and an electronic device.
Background
With the continued development of urban traffic, urban intersections have become frequent sites of traffic violations. To monitor intersections, existing systems track vehicles using acquisition devices installed at the intersection. When several acquisition devices are present, their acquisition areas overlap, so vehicles monitored by different devices may be the same vehicle; the same vehicle identified by different devices therefore needs to be matched. In the related art, this matching is performed using the appearance features and license plates of the vehicles identified by the different devices. However, because of occlusion and differing shooting angles, the appearance features or license plates of the same vehicle in images captured by different cameras may be incomplete, so the accuracy of vehicle matching is low (the vehicle here being the target object). The related art thus suffers from low target-object matching accuracy.
No effective solution has yet been proposed for the problem of low target-object matching accuracy in the related art.
Disclosure of Invention
Embodiments of the invention provide an object matching method and apparatus, a storage medium, and an electronic device, so as to at least solve the problem of low target-object matching accuracy in the related art.
According to an embodiment of the present invention, an object matching method is provided, including: acquiring N images captured at a target moment by N acquisition devices, where the N acquisition devices are arranged in N directions of a target area, the device in each direction captures the corresponding one of the N images, and N is greater than 1; identifying the target objects in the N images to obtain N target object sets, where each set contains the target objects identified in its corresponding image and the sets correspond one-to-one with the images; and matching the target objects across the N images according to the coordinate position of each target object in a target coordinate system, to obtain a matching result.
According to another embodiment of the present invention, an object matching apparatus is provided, including: an acquisition module configured to acquire N images captured at a target moment by N acquisition devices, where the N acquisition devices are arranged in N directions of a target area, the device in each direction captures the corresponding one of the N images, and N is greater than 1; an identification module configured to identify the target objects in the N images to obtain N target object sets, where each set contains the target objects identified in its corresponding image and the sets correspond one-to-one with the images; and a matching module configured to match the target objects across the N images according to the coordinate position of each target object in a target coordinate system, to obtain a matching result.
According to a further embodiment of the invention, there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, N images captured at a target moment by N acquisition devices are acquired, the target objects in the N images are identified to obtain N target object sets, and the target objects across the N images are matched according to the coordinate position of each target object in a target coordinate system to obtain a matching result. This solves the problem of low target-object matching accuracy in the related art and improves matching accuracy.
Drawings
Fig. 1 is a block diagram of the hardware structure of a mobile terminal running an object matching method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of matching objects according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a target area according to an embodiment of the invention;
FIG. 4 is a schematic view of the range of data acquired by an acquisition device according to an embodiment of the invention;
FIG. 5 is a schematic diagram of target object matching according to an embodiment of the invention;
FIG. 6 is a schematic diagram of matching cloud points in a set of target cloud points in accordance with an embodiment of the invention;
fig. 7 is a block diagram of a structure of a matching apparatus of an object according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The object matching method provided in the embodiments of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal running the object matching method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a microprocessor such as an MCU or a programmable logic device such as an FPGA) and a memory 104 for storing data, and may also include a transmission device 106 for communication and an input-output device 108. Those skilled in the art will appreciate that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the mobile terminal; for example, the mobile terminal may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a matching method of objects in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In this embodiment, a method for matching objects is provided, fig. 2 is a flowchart of a method for matching objects according to an embodiment of the present invention, and as shown in fig. 2, the method includes the following steps:
s202, acquiring N images acquired by N acquisition devices at a target moment, wherein the N acquisition devices are respectively arranged in N directions of a target area, and the acquisition devices in each direction acquire a corresponding image in the N images, and N is larger than 1;
in this embodiment, the target area may be a junction where several roads meet, for example an intersection. One acquisition device is placed in each direction of the target area, so N devices are placed in N directions, and each device captures an image. Fig. 3 is a schematic view of a target area according to an embodiment of the present invention. As shown in fig. 3, the target area is divided into four directions, one acquisition device is arranged in each direction, and the devices are labelled A, B, C and D in fig. 3.
Acquisition devices fall into two categories: camera devices and radar devices. A camera device captures a digital image, and the target object is identified by performing image recognition on that digital image. A radar device produces a point-cloud image: the lidar scans the surrounding environment to obtain a large amount of point-cloud data, the data are projected into a single picture to form the point-cloud image, and the target object is then identified by clustering the points in that image.
S204, identifying the target objects in the N images to obtain N target object sets, where each target object set contains the target objects identified in its corresponding image and the sets correspond one-to-one with the images;
in this embodiment, for the N images acquired at the same moment, the target objects in each image are identified separately; the target objects may be vehicles in the images. The target objects in each image form one target object set, so the N target object sets correspond one-to-one with the N images.
S206, matching the target objects in the N images according to the coordinate positions of each target object in the N target object sets in the target coordinate system to obtain a matching result.
In this embodiment, the target coordinate system may be a world coordinate system, and the coordinate position of a target object in it may be expressed as longitude and latitude; that is, target objects from different target object sets can be matched according to their longitudes and latitudes.
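Because positions in the target coordinate system are expressed as longitude and latitude, distances between target objects call for a great-circle metric rather than a plain planar one. The patent does not name a formula; the haversine distance below is one conventional choice, offered here only as a sketch:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
```

At intersection scale (tens of metres) this behaves almost linearly, so any threshold expressed in metres translates directly.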
The N acquisition devices in the target area do not cover identical ranges, and the ranges of several devices may overlap, so a vehicle in the target area may appear in the images of multiple devices. Matching the target objects means identifying, across different images, the detections that correspond to the same physical object. Fig. 4 is a schematic view of the data-acquisition ranges of the devices according to an embodiment of the present invention. As shown in fig. 4, the acquisition ranges of devices A, B, C and D overlap. X, Y and Z in fig. 4 are target objects within these ranges: X appears only in the range of device A, Y appears in the ranges of all of A, B, C and D, and Z appears in the ranges of B and D. The target objects identified in the image from device A are X(A) and Y(A); from device B, Y(B) and Z(B); from device C, Y(C); and from device D, Y(D) and Z(D). Matching the target objects means determining that Y(A), Y(B), Y(C) and Y(D) are the same target object, and that Z(B) and Z(D) are the same target object.
Matching the target objects in the N images is divided into two cases according to different acquisition equipment:
case one: under the condition that the acquisition equipment is image pickup equipment, matching the target objects in the N images according to the coordinate positions of each target object in the N target object sets in the target coordinate system, wherein the obtaining of a matching result comprises the following steps: starting from i=1, executing a loop operation, ending the loop operation when i=n, and determining an nth matching object set obtained when the loop operation is ended as the matching result, wherein the nth matching object set comprises the target objects matched with each other in N target object sets and target objects failing to match, and the loop operation is as follows: according to the coordinate position of each target object in the N target object sets in the target coordinate system, respectively matching the target object in the ith matching object set with the target object in the (i+1) th target object set in the N target object sets to obtain the (i+1) th matching object set, wherein when i=1, the (1) th matching object set is the (1) th target object set in the N target object sets, and when i is greater than 1, the (i) th matching object set comprises the (i-1) th matching object set, the (i) th target object which is matched with the (i) th target object set and the (i+1) th target object set, and the (i+1) th matching object set records the (i+1) th matching object set and the (i+1) th target object set) mutually matched; and performing an add 1 operation on i.
In this embodiment, the N target object sets are ordered (any ordering will do) and matched sequentially. The 1st target object set is taken as the 1st matching object set; the 1st matching object set is matched with the 2nd target object set to obtain the 2nd matching object set; the 2nd matching object set is then matched with the 3rd target object set to obtain the 3rd matching object set; and so on, until the (N-1)th matching object set is matched with the Nth target object set to obtain the Nth matching object set. The Nth matching object set is taken as the matching result, completing the matching of the target objects in the N images.
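The sequential matching described above is a simple fold over the ordered sets. A minimal sketch, where `match_pair` (the per-pair matching rule described later in the text) is passed in as a parameter and its name is ours:

```python
def match_all(object_sets, match_pair):
    """Fold N per-image target object sets into one final matching set.

    object_sets: the N target object sets, in any fixed order.
    match_pair(matched, next_set): returns the next matching object set
    obtained by matching the current matching set against the next target set.
    """
    matched = list(object_sets[0])       # the 1st matching set is the 1st target set
    for next_set in object_sets[1:]:     # sets 2..N, matched in sequence
        matched = match_pair(matched, next_set)
    return matched                       # the Nth matching set = matching result
```

Any concrete pairwise rule can be plugged in; the loop itself only fixes the order of accumulation.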
Fig. 5 is a schematic diagram of target-object matching according to an embodiment of the present invention. As shown in fig. 5, taking N = 4 as an example, the target objects in the 4 images captured by the devices in the four directions of the target area are identified to obtain 4 target object sets; sets 1, 2, 3 and 4 are the target object sets identified in the images captured by devices A, B, C and D respectively.
The target objects in set 1 (the 1st matching object set) are matched with those in set 2 (the 2nd target object set) to obtain set 12 (the 2nd matching object set); the target objects in set 12 are matched with those in set 3 (the 3rd target object set) to obtain set 123 (the 3rd matching object set); and the target objects in set 123 are matched with those in set 4 (the 4th target object set) to obtain set 1234 (the 4th matching object set).
Set 1234 is then taken as the result of matching the target objects in the four images. Set 1234 contains both successfully matched and unmatched target objects, where successfully matched objects are those recorded in different target object sets that denote the same physical object in the target area. For example, in fig. 5 the target objects in set 1 are A1, A2 and A3; in set 2, B1, B2 and B3; in set 3, C1 and C2; and in set 4, D1.
Matching set 1 with set 2 yields set 12, which records the target objects A1 (or B1), A2, A3, B2 and B3: A1 from set 1 and B1 from set 2 match each other, i.e. they are the same vehicle and the match succeeds, so only one of A1 and B1 is recorded in set 12, while A2, A3, B2 and B3 denote distinct objects in the target area, i.e. objects whose matching failed at this stage. Following the matching process of fig. 5, the final matching result (the 4th matching object set) records the target objects A1 (or B1 or C2 or D1), A2, A3, B2 (or C1), and B3, i.e. five target objects in all.
In an optional embodiment, matching the target objects in the ith matching object set with those in the (i+1)th target object set to obtain the (i+1)th matching object set includes performing a target operation on each target object in the ith matching object set (the object currently being processed is called the current target object): according to the coordinate positions in the target coordinate system of the current target object and of each target object in the (i+1)th target object set, the object in the (i+1)th target object set closest to the current target object is determined as the first target object; if the current target object and the first target object satisfy a preset condition, they are determined to match each other; and if they match, the first target object or the current target object is recorded in the (i+1)th matching object set. After the first target object and the current target object are determined to match, the first target object is deleted from the (i+1)th target object set.
If the current target object and the first target object do not satisfy the preset condition, the current target object is determined to be an object whose matching failed and is recorded in the (i+1)th matching object set.
After the target operation has been performed on every target object in the ith matching object set, the target objects remaining in the (i+1)th target object set are determined to be objects whose matching failed and are also recorded in the (i+1)th matching object set.
In this embodiment, matching the ith matching object set with the (i+1)th target object set actually matches the target objects recorded in the two sets. Take matching set 5 with set 6 as an example, where set 5 records target objects E1 and E2 and set 6 records F1, F2 and F3.
Whether two target objects from the two sets match is decided by a set of constraints. Taking the matching pair E1 and F2 as an example, E1 and F2 satisfy the following constraints: the distance between E1 and F2 in the target coordinate system is smaller than a first preset threshold, the target object in set 6 closest to E1 is F2, and the target object in set 5 closest to F2 is E1.
Specifically, matching the target objects in set 5 and set 6 is to perform a target operation on each target object in set 5:
when the target operation is executed on the target object E1 in the set 5, E1 is the current target object, and the object closest to E1 (the current target object) is found in the set 6 to be F2 (the first target object); when E1 and F2 satisfy the preset condition, so that E1 and F2 match each other, E1 and F2 are target objects for which matching is successful, and any one of the target objects E1 and F2 is recorded in the set 56 (corresponding to the i+1th matching object set).
When the target operation is performed on target object E2 in set 5, E2 is the current target object, and the object in set 6 closest to E2 is again found to be F2 (the first target object). Since E2 and F2 do not satisfy the preset condition, they do not match each other: E2 is an object whose matching failed, and E2 is recorded in set 56 (corresponding to the (i+1)th matching object set).
After E1 and F2 are determined to match, F2 is deleted from set 6. After the target operation has been performed on all target objects in set 5, the target objects remaining in set 6 (F1 and F3) are recorded in set 56, i.e. F1 and F3 are determined to be objects whose matching failed. The target objects in set 56 obtained by matching sets 5 and 6 are therefore E1 (or F2), E2, F1 and F3.
The preset condition concerns whether the object in the ith matching object set closest to the first target object is the current target object. Whether the current target object and the first target object satisfy the preset condition is determined as follows: according to the coordinate positions in the target coordinate system of the first target object and of each target object in the ith matching object set, the object in the ith matching object set closest to the first target object is determined as the second target object; if the second target object is the same object as the current target object, and the distance between the current target object and the first target object is smaller than or equal to the first preset threshold, the current target object and the first target object satisfy the preset condition.
That is, the object in the ith matching object set closest to the first target object is found and recorded as the second target object. If the current target object and the second target object are the same object, and the distance between the current target object and the first target object in the target coordinate system is smaller than or equal to the first preset threshold, the preset condition is satisfied; otherwise it is not. In the example above, the object in set 6 closest to E1 (the current target object) is F2, the distance between E1 and F2 is below the first preset threshold, and the object in set 5 closest to F2 is E1 (the second target object); the second target object is the same object as the current target object, so E1 and F2 satisfy the preset condition. Likewise, the object in set 6 closest to E2 (the current target object) is F2 and the distance between E2 and F2 is below the first preset threshold, but the object in set 5 closest to F2 is E1 (the second target object); E1 and E2 are different target objects, so the second target object is not the same object as the current target object, and E2 and F2 do not satisfy the preset condition.
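The mutual-nearest-neighbour test with a distance threshold described above can be sketched as follows. The names `nearest` and `match_pair` are ours, and `dist` stands for any distance defined in the target coordinate system:

```python
def nearest(obj, pool, dist):
    """Closest object to obj in pool, or None if pool is empty."""
    return min(pool, key=lambda o: dist(obj, o)) if pool else None

def match_pair(matched_set, next_set, dist, threshold):
    """Match the ith matching set against the (i+1)th target set.

    Two objects match when each is the other's nearest neighbour and their
    distance is within threshold; one representative of the pair is kept.
    Everything else is carried forward as an object whose matching failed.
    """
    remaining = list(next_set)
    out = []
    for cur in matched_set:
        first = nearest(cur, remaining, dist)           # closest in set i+1
        ok = False
        if first is not None:
            second = nearest(first, matched_set, dist)  # closest back in set i
            ok = second is cur and dist(cur, first) <= threshold
        if ok:
            remaining.remove(first)   # matched: keep one representative (cur)
        out.append(cur)               # cur stands for the pair, or fails alone
    out.extend(remaining)             # leftovers of set i+1 failed to match
    return out
```

Each object in the next set can be consumed at most once, mirroring the deletion of the first target object after a successful match.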
Optionally, when the acquisition devices are camera devices, after identifying the target objects in the N images to obtain N target object sets, the method further includes performing the following operation on each of the N target object sets (the set currently being processed is called the current target object set): acquiring the pixel coordinates of each target object in the current target object set within the current image corresponding to that set; acquiring the current mapping matrix corresponding to the current image, where the current mapping matrix converts the pixel coordinate system into the target coordinate system; and obtaining the coordinate position in the target coordinate system of each target object in the current target object set from its pixel coordinates and the current mapping matrix. Before the current mapping matrix is acquired, the method further includes: acquiring a preset image captured by the same camera device as the current image, and selecting several points in the preset image; acquiring the pixel coordinates of these points in the preset image and their coordinate positions in the target coordinate system; and determining the current mapping matrix from those pixel coordinates and coordinate positions.
In the case that the acquisition device is an image capturing device, when a plurality of image capturing devices acquire a plurality of images at the same time, after the target object in each image is identified, the obtained pixel coordinates of the target object in each image need to be converted into coordinates in one and the same coordinate system (i.e., the target coordinate system) in order to complete the matching of the target objects across the images.
The mapping relation between the pixel coordinates in the images shot by each image capturing device and the coordinates in the target coordinate system is represented by a mapping matrix, and the mapping matrices corresponding to images shot by different image capturing devices are different. The target objects recorded in each target object set are mapped into the target coordinate system through the different mapping matrices corresponding to the different images.
Before coordinate conversion, mapping calibration needs to be carried out in advance on the images acquired by each image capturing device, so as to obtain the mapping matrices corresponding to the images acquired by the different devices. Several point positions are selected within the shooting range of the device, and a worker carries a positioning device to the actual spatial position corresponding to each point position, so as to obtain the longitude and latitude of these point positions, i.e., their coordinate positions in the target coordinate system. At the same time, the pixel coordinates of these point positions in an image (the preset image) of the device are obtained, so that the mapping matrix corresponding to images shot by the device can be determined from the pixel coordinates of the point positions in the image and their coordinate positions in the target coordinate system.
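The calibration step above amounts to fitting a mapping matrix from point correspondences. The following sketch uses the standard direct linear transform (DLT) to estimate a 3x3 homography; the function names and the choice of an SVD-based solve are assumptions, since the patent does not specify the estimation method:

```python
import numpy as np

def fit_mapping_matrix(pixel_pts, world_pts):
    """Estimate a 3x3 homography mapping pixel coordinates to target
    coordinates (e.g. a local longitude/latitude plane) from at least
    four calibrated point pairs, via the DLT method.
    """
    A = []
    for (u, v), (x, y) in zip(pixel_pts, world_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the null vector of A (last right-singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixel_to_world(H, u, v):
    """Apply the mapping matrix to one pixel coordinate."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```

For example, calibrating with the four corners of a 100x100-pixel region against a unit square in the target plane maps pixel (50, 50) to roughly (0.5, 0.5). In practice a library routine such as OpenCV's homography estimation would typically be used instead of this hand-rolled solve.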
And a second case: under the condition that the acquisition equipment is radar equipment, matching the target objects in the N images according to the coordinate positions of each target object in the N target object sets in the target coordinate system to obtain a matching result, wherein the matching result comprises the following steps: mapping cloud points of each target object in the N target object sets into the target coordinate system to obtain a target cloud point set; and circularly executing the following operations until all cloud points in the target cloud point set are completely matched, and stopping executing the following operations to obtain the matching result: selecting one cloud point from unmatched cloud points in the target cloud point set as an initial cloud point; determining cloud points, of which the distance from the initial cloud points in the target cloud point set is smaller than or equal to a second preset threshold, as target cloud points; and under the condition that the target cloud point and the initial cloud point meet a second preset condition, determining that the initial cloud point and the target cloud point are cloud points acquired by radars in different directions for the same target object.
Under the condition that the target cloud point and the initial cloud point meet a second preset condition, determining that the initial cloud point and the target cloud point are cloud points acquired by radars in different directions for the same target object comprises the following steps: tracking the target object corresponding to the target cloud point and the initial cloud point to obtain the coordinate positions of the target cloud point and the initial cloud point in the target coordinate system; and determining that the target cloud point and the initial cloud point meet the second preset condition under the condition that the distances between the target cloud point and the initial cloud point in the target coordinate system are smaller than or equal to a third preset threshold value within a preset duration threshold value.
In this embodiment, from the data acquired by a radar, a plurality of target objects are extracted by means of point cloud clustering, and each target object is presented as cloud points, so as to obtain a target object set; that is, each target object set includes one or more cloud points. It is then necessary to distinguish which of these cloud points belong to the same vehicle and which belong to another vehicle. Specifically:
First, the target objects of the N target object sets are uniformly mapped into the target coordinate system to obtain a target cloud point set. The cloud points in the target cloud point set are then matched, i.e., it is judged which cloud points in the target cloud point set correspond to the same target object.
Fig. 6 is a schematic diagram of matching cloud points in a target cloud point set according to an embodiment of the present invention; as shown in Fig. 6, the cloud points within each box are cloud points that match each other. One cloud point is selected from the target cloud point set as an initial cloud point, and a cloud point whose distance from the initial cloud point is smaller than the second preset threshold value is determined as a target cloud point; that is, the target cloud point is likely to be a cloud point of the same target object as the initial cloud point. Whether the target cloud point really is a cloud point of the same target object as the initial cloud point is then further judged according to whether the two satisfy the second preset condition.
Wherein the second preset condition includes: and tracking target objects corresponding to the initial cloud point and the target cloud point respectively, and determining that the initial cloud point and the target cloud point meet a second preset condition when the distance between the target cloud point and the initial cloud point is always smaller than a third preset threshold value within a preset time threshold value after the target moment.
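The cloud-point matching loop described above can be sketched as follows, with each cloud point represented by its tracked trajectory over the preset duration. The data layout and all names are illustrative assumptions:

```python
import math

def match_radar_points(tracks, gate_dist, confirm_dist):
    """Match cloud points acquired by radars in different directions.

    `tracks` maps a cloud-point id to its trajectory: a list of (x, y)
    positions in the target coordinate system, index 0 being the target
    moment and later indices covering the preset duration threshold.
    A candidate within `gate_dist` (the second preset threshold) of the
    seed point is merged with it when their distance stays within
    `confirm_dist` (the third preset threshold) over the whole duration.
    Candidates are compared against the seed only (a simple,
    non-transitive grouping).
    """
    unmatched = set(tracks)
    groups = []
    while unmatched:
        seed = unmatched.pop()                  # initial cloud point
        group = [seed]
        for cand in list(unmatched):
            # Second preset threshold: gate on distance at the target moment.
            if math.dist(tracks[seed][0], tracks[cand][0]) > gate_dist:
                continue
            # Second preset condition: distance stays small for the duration.
            if all(math.dist(p, q) <= confirm_dist
                   for p, q in zip(tracks[seed], tracks[cand])):
                group.append(cand)
                unmatched.discard(cand)
        groups.append(sorted(group))
    return groups
```

Two cloud points that travel together (for instance, front and rear radar returns from the same vehicle) end up in one group, while a point 10 m away stays separate.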
Through the embodiment, the N images acquired by the N acquisition devices at the target moment are acquired, the target objects in the N images are identified to obtain N target object sets, and the target objects in the N images are matched according to the coordinate positions of each target object in the N target object sets in the target coordinate system to obtain a matching result, so that the problem of low accuracy of target object matching in the related technology is solved, and the effect of improving the target object matching accuracy is achieved.
In an alternative embodiment, after matching the target objects in the N images according to the coordinate positions of each target object in the N target object sets in the target coordinate system, the method further includes: and executing a checking operation on the matching result according to the course angle of each target object in the N target object sets to obtain the target matching result.
The matched target objects in each target object set are tracked and their course angles are calculated. If, among a group of mutually matched target objects, the course angle of a fourth target object deviates from that of the others by more than a certain angle, the fourth target object is judged to be another target object and its matching is cancelled.
The reasons for calculating the course angle include: 1) the traveling direction of the vehicle needs to be rendered on the map in real time, so the current traveling direction of the vehicle needs to be calculated; 2) the radar provides course angle data, but in the current environment these data are missing or inaccurate, so the course angle of the target object is instead calculated from the historical track of the vehicle.
The method for calculating the course angle is as follows: a history track point is selected; for any target object, the global tracking data is searched, and a past point whose longitude-and-latitude distance from the current point exceeds a fourth preset threshold value is taken as the starting point, the end point being the current point; the course angle is then calculated from the angle of the line segment connecting the starting point and the current point. To avoid searching too far back through the historical data, only data points within a preset time are considered. If the moving distance of the target object within the preset time is too short, for example only 1 meter, the vehicle can be considered stationary, in view of recognition jitter and error in the vehicle position. Optionally, the course angle may also be calculated from the 3D box.
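The course-angle calculation and the course-angle check on the matching result can be sketched together as follows. The angle convention (degrees, east = 0, counter-clockwise), the window-based history search, and all parameter names are illustrative assumptions:

```python
import math

def heading_angle(track, min_dist, window):
    """Course angle in degrees from a history track of (x, y) points,
    newest last. Searches backwards within the last `window` points for
    the first point farther than `min_dist` (the fourth preset threshold)
    from the current point; returns None if the object moved too little,
    in which case it is treated as stationary.
    """
    cur = track[-1]
    for start in reversed(track[-window:-1]):
        if math.dist(start, cur) > min_dist:
            return math.degrees(math.atan2(cur[1] - start[1],
                                           cur[0] - start[0])) % 360.0
    return None  # moved less than min_dist within the window: stationary

def headings_consistent(track_a, track_b, max_diff, **kw):
    """Checking operation on a matched pair: the match is cancelled when
    the two course angles differ by more than `max_diff` degrees."""
    ha, hb = heading_angle(track_a, **kw), heading_angle(track_b, **kw)
    if ha is None or hb is None:
        return True  # a stationary object gives no evidence either way
    diff = abs(ha - hb) % 360.0
    return min(diff, 360.0 - diff) <= max_diff
```

For example, an eastbound track and a northbound track differ by about 90 degrees, so with a 45-degree tolerance their match would be cancelled.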
Optionally, a checking operation can also be performed on the matching result through a history threshold: points whose distance differences from the farthest historical same-time track points are smaller than the threshold are clustered into the same target vehicle, and target objects that have matched historically are matched preferentially.
In an alternative embodiment, the method further comprises: and under the condition that M target objects are matched with each other, respectively modifying the identifications of the M target objects from the initial identifications to target identifications.
In the case that the matching of the M target objects is successful, after determining the identifiers of the M target objects as target identifiers, the method further includes: and under the condition that a third target object in the M target objects is unmatched with other objects of the M target objects, modifying the identification of the third target object from the target identification to the initial identification of the third target object.
Data fusion association is carried out on the matched target objects: each target object in each target object set is assigned an identifier, and when a plurality of target objects match each other, the smallest identifier among them is selected as the fusion identifier. For example, if target objects 5, 8, 9 and 10 match each other, the identifiers of these four target objects are modified from their original initial identifiers to the target identifier 5; if the object with initial identifier 5 is no longer matched, the identifiers of the remaining matched objects are modified to 8.
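The fusion-identifier assignment above can be sketched as follows; the group/dictionary representation is an illustrative assumption:

```python
def assign_fusion_ids(initial_ids, matched_groups):
    """Give every target object in a mutually matched group the smallest
    initial identifier in the group as its fusion identifier; unmatched
    objects keep their initial identifier. Returns {initial_id: current_id}.
    """
    ids = {i: i for i in initial_ids}  # default: keep the initial identifier
    for group in matched_groups:
        fused = min(group)             # smallest identifier wins
        for i in group:
            ids[i] = fused
    return ids

# Objects 5, 8, 9 and 10 match each other: every identifier becomes 5.
# If the object with identifier 5 later drops out of the group,
# re-running on the reduced group fuses the remaining three under 8.
```

Re-running the assignment whenever group membership changes reproduces the behaviour described in the text, including the fallback to identifier 8 once 5 is unmatched.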
Under normal conditions the same target object should remain matched throughout; however, due to factors such as global coordinate precision, an excessively large threshold value, vehicle length error, and errors caused by missing data, a match may need to be cancelled.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and may of course also be implemented by hardware; in many cases, however, the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, essentially or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiment also provides an object matching device, which is used for implementing the above embodiment and the preferred implementation, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a block diagram of a structure of an object matching apparatus according to an embodiment of the present invention, as shown in fig. 7, the apparatus including:
an acquiring module 72, configured to acquire N images acquired by N acquiring devices at a target time, where the N acquiring devices are respectively disposed in N directions of a target area, and N is greater than 1, where the acquiring device in each direction acquires a corresponding image in the N images;
the identifying module 74 is configured to identify target objects in the N images to obtain N target object sets, where each target object set includes target objects identified in an image corresponding to the target object set, and the target object sets are in one-to-one correspondence with the images;
and the matching module 76 is configured to match the target objects in the N images according to the coordinate positions of each target object in the N target object sets in the target coordinate system, so as to obtain a matching result.
In an exemplary embodiment, the foregoing apparatus is further configured to, when the acquisition device is an image capturing device, start from i=1, perform a loop operation, end the loop operation when i=N, and determine, as the matching result, the Nth matching object set obtained when the loop operation ends, where the Nth matching object set includes the target objects that are matched with each other in the N target object sets and the target objects that failed to match, and where the loop operation is: according to the coordinate position of each target object in the N target object sets in the target coordinate system, matching the target objects in the ith matching object set with the target objects in the (i+1)th target object set of the N target object sets to obtain the (i+1)th matching object set, wherein when i=1, the 1st matching object set is the 1st target object set of the N target object sets, when i is greater than 1, the ith matching object set records the target objects that are matched with each other in the (i-1)th matching object set and the ith target object set, and the (i+1)th matching object set records the target objects that are matched with each other in the ith matching object set and the (i+1)th target object set; and performing an add-1 operation on i.
In an exemplary embodiment, the above apparatus is further configured to perform a target operation on each target object in the ith matching object set, where the target object when performing the target operation is referred to as a current target object: according to the coordinate position of the current target object in the target coordinate system and the coordinate position of each target object in the (i+1) th target object set in the target coordinate system, determining an object closest to the current target object in the (i+1) th target object set as a first target object; under the condition that the current target object and the first target object meet preset conditions, determining that the first target object and the current target object are matched with each other; and recording the first target object or the current target object in the (i+1) th matched object set under the condition that the first target object and the current target object are matched with each other.
In an exemplary embodiment, the foregoing apparatus is further configured to delete the first target object from the (i+1)th target object set after the determining that the first target object and the current target object match each other.
In an exemplary embodiment, the above apparatus is further configured to: and under the condition that the current target object and the first target object do not meet the preset condition, determining the current target object as an object with failed matching, and recording the current target object in the (i+1) th matching object set.
In an exemplary embodiment, the foregoing apparatus is further configured to determine, after the target operation is performed on each target object in the i+1th set of matching objects, remaining target objects in the i+1th set of target objects as objects that fail to match, and record the objects that fail to match in the i+1th set of target objects in the i+1th set of matching objects.
In an exemplary embodiment, the above apparatus is further configured to determine, in the ith set of matching objects, an object closest to the first target object as a second target object according to a coordinate position of the first target object in the target coordinate system and a coordinate position of each target object in the target coordinate system of the ith set of matching objects; and under the condition that the second target object and the current target object are the same object and the distance between the current target object and the first target object is smaller than or equal to a first preset threshold value, determining that the current target object and the first target object meet the preset condition.
In an exemplary embodiment, the foregoing apparatus is further configured to map cloud points of each target object in the N target object sets into the target coordinate system to obtain a target cloud point set; and circularly executing the following operations until all cloud points in the target cloud point set are completely matched, and stopping executing the following operations to obtain the matching result: selecting one cloud point from unmatched cloud points in the target cloud point set as an initial cloud point; determining cloud points, of which the distance from the initial cloud points in the target cloud point set is smaller than or equal to a second preset threshold, as target cloud points; and under the condition that the target cloud point and the initial cloud point meet a second preset condition, determining that the initial cloud point and the target cloud point are cloud points acquired by radars in different directions for the same target object.
In an exemplary embodiment, the device is further configured to track a target object corresponding to the target cloud point and the initial cloud point, so as to obtain coordinate positions of the target cloud point and the initial cloud point in the target coordinate system; and determining that the target cloud point and the initial cloud point meet the second preset condition under the condition that the distances between the target cloud point and the initial cloud point in the target coordinate system are smaller than or equal to a third preset threshold value within a preset duration threshold value.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present invention also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In an exemplary embodiment, the electronic apparatus may further include a transmission device connected to the processor, and an input/output device connected to the processor.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may be implemented in program code executable by computing devices, so that they may be stored in a storage device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method of matching objects, comprising:
acquiring N images acquired by N acquisition devices at a target moment, wherein the N acquisition devices are respectively arranged in N directions of a target area, and the acquisition devices in each direction acquire a corresponding image in the N images, and N is larger than 1;
identifying target objects in the N images to obtain N target object sets, wherein each target object set comprises target objects identified in images corresponding to the target object sets, and the target object sets are in one-to-one correspondence with the images;
and matching the target objects in the N images according to the coordinate positions of each target object in the N target object sets in the target coordinate system to obtain a matching result.
2. The method according to claim 1, wherein, in the case where the acquisition device is an image capturing device, according to a coordinate position of each target object in the N target object sets in the target coordinate system, matching the target objects in the N images, to obtain a matching result includes:
Starting from i=1, executing a loop operation, ending the loop operation when i=n, and determining an nth matching object set obtained when the loop operation is ended as the matching result, wherein the nth matching object set comprises the target objects matched with each other in N target object sets and target objects failing to match, and the loop operation is as follows:
according to the coordinate position of each target object in the N target object sets in the target coordinate system, matching the target objects in the ith matching object set with the target objects in the (i+1)th target object set of the N target object sets, respectively, to obtain the (i+1)th matching object set, wherein when i=1, the 1st matching object set is the 1st target object set of the N target object sets, when i is greater than 1, the ith matching object set records the target objects that are matched with each other in the (i-1)th matching object set and the ith target object set, and the (i+1)th matching object set records the target objects that are matched with each other in the ith matching object set and the (i+1)th target object set;
And performing an add 1 operation on i.
3. The method according to claim 2, wherein matching the target object in the i-th matching object set with the target object in the i+1th target object set in the N target object sets, respectively, to obtain the i+1th matching object set, includes:
performing a target operation on each target object in the ith matching object set, wherein the target object when performing the target operation is called a current target object:
according to the coordinate position of the current target object in the target coordinate system and the coordinate position of each target object in the (i+1) th target object set in the target coordinate system, determining an object closest to the current target object in the (i+1) th target object set as a first target object;
under the condition that the current target object and the first target object meet preset conditions, determining that the first target object and the current target object are matched with each other;
and recording the first target object or the current target object in the (i+1) th matched object set under the condition that the first target object and the current target object are matched with each other.
4. The method of claim 3, wherein the step of,
after said determining that the first target object and the current target object match each other, the method further comprises: deleting the first target object from the i+1th target object set;
the method further comprises the steps of: and under the condition that the current target object and the first target object do not meet the preset condition, determining the current target object as an object with failed matching, and recording the current target object in the (i+1) th matching object set.
5. A method according to claim 3, wherein after performing the target operation on each target object in the ith set of matching objects, the method further comprises:
and determining the rest target objects in the (i+1) th target object set as objects with failed matching, and recording the objects with failed matching in the (i+1) th target object set in the (i+1) th matching object set.
6. A method according to claim 3, characterized in that the method further comprises:
according to the coordinate position of the first target object in the target coordinate system and the coordinate position of each target object in the i-th matching object set in the target coordinate system, determining an object closest to the first target object in the i-th matching object set as a second target object;
And under the condition that the second target object and the current target object are the same object and the distance between the current target object and the first target object is smaller than or equal to a first preset threshold value, determining that the current target object and the first target object meet the preset condition.
7. The method according to claim 1, wherein, in the case where the acquisition device is a radar device, matching the target objects in the N images according to the coordinate positions of each target object in the N target object sets in the target coordinate system, to obtain a matching result includes:
mapping cloud points of each target object in the N target object sets into the target coordinate system to obtain a target cloud point set;
and circularly executing the following operations until all cloud points in the target cloud point set are completely matched, and stopping executing the following operations to obtain the matching result:
selecting one cloud point from unmatched cloud points in the target cloud point set as an initial cloud point;
determining cloud points, of which the distance from the initial cloud points in the target cloud point set is smaller than or equal to a second preset threshold, as target cloud points;
And under the condition that the target cloud point and the initial cloud point meet a second preset condition, determining that the initial cloud point and the target cloud point are cloud points acquired by radars in different directions for the same target object.
8. The method of claim 7, wherein determining that the initial cloud point and the target cloud point are cloud points acquired by radars in different directions for the same target object if the target cloud point and the initial cloud point satisfy a second preset condition comprises:
tracking the target object corresponding to the target cloud point and the initial cloud point to obtain the coordinate positions of the target cloud point and the initial cloud point in the target coordinate system;
and determining that the target cloud point and the initial cloud point meet the second preset condition under the condition that the distances between the target cloud point and the initial cloud point in the target coordinate system are smaller than or equal to a third preset threshold value within a preset duration threshold value.
9. An object matching apparatus, comprising:
the acquisition module is used for acquiring N images acquired by N acquisition devices at the target moment, wherein the N acquisition devices are respectively arranged in N directions of a target area, the acquisition device in each direction acquires a corresponding image in the N images, and N is larger than 1;
The identification module is used for identifying target objects in the N images to obtain N target object sets, wherein each target object set comprises target objects identified in images corresponding to the target object sets, and the target object sets are in one-to-one correspondence with the images;
and the matching module is used for matching the target objects in the N images according to the coordinate positions of each target object in the N target object sets in the target coordinate system to obtain a matching result.
10. A computer readable storage medium, characterized in that a computer program is stored in the computer readable storage medium, wherein the computer program, when being executed by a processor, implements the steps of the method according to any of the claims 1 to 8.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
