CN115168330A - Data processing method, device, server and storage medium - Google Patents


Publication number
CN115168330A
CN115168330A (Application No. CN202210835622.0A)
Authority
CN
China
Prior art keywords
target
sensing
perception
determining
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210835622.0A
Other languages
Chinese (zh)
Inventor
杨海军
华秀敏
张然懋
周光涛
赵晓宇
李胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unicom Smart Connection Technology Ltd
Original Assignee
China Unicom Smart Connection Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unicom Smart Connection Technology Ltd filed Critical China Unicom Smart Connection Technology Ltd
Priority to CN202210835622.0A priority Critical patent/CN115168330A/en
Publication of CN115168330A publication Critical patent/CN115168330A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • G06F16/215Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a data processing method, a device, a server and a storage medium. The method includes: determining position information, length information and width information of each perception target according to a perception data set acquired from a perception device, and determining distance information between each perception target and the perception device based on the position information of each perception target; determining a position information optimization parameter for each perception target according to the distance information; optimizing the position information of each perception target based on its position information optimization parameter to obtain optimized position information, and determining the perception region of each perception target according to the optimized position information, length information and width information; determining a circumscribed region of a target vehicle according to a vehicle data set acquired from the target vehicle; determining a coincidence degree when an intersection exists between the circumscribed region of any target vehicle and the perception region of any perception target; and determining whether the perception target and the target vehicle correspond based on the coincidence degree. The accuracy of the coincidence degree is thereby improved.

Description

Data processing method, device, server and storage medium
Technical Field
The embodiment of the invention relates to the technical field of data processing, in particular to a data processing method, a data processing device, a server and a storage medium.
Background
When a real-time traffic model is constructed based on digital twin technology, real-time traffic data must be acquired. On an intelligently upgraded road, perception data can be obtained from perception devices, from which information such as the position, speed, direction and size of each perception target can be determined; vehicle data can likewise be acquired from the vehicle system, from which the position, speed, direction, size and other information of the vehicle can be determined. A real-time traffic model is then constructed from the perception data and the vehicle data by digital twin technology. When the vehicle corresponding to the vehicle system is located within the perception region of a perception device, the perception targets determined from the perception data include that vehicle; in that case the perception targets obtained by the server from the perception data duplicate the vehicle obtained from the vehicle data, and constructing a real-time traffic model directly from both data sets easily causes model redundancy.
In the prior art, the distance between a perception target determined from perception data and a vehicle determined from vehicle data is usually calculated to decide whether the two are the same target object, and the perception data is deduplicated once they are determined to be the same object.
However, determining whether the perception target and the vehicle are the same target object from distance alone is not very accurate, resulting in a poor data deduplication effect.
Disclosure of Invention
The invention provides a data processing method that improves the accuracy of determining the perception region corresponding to a perception target, and thus the accuracy of the coincidence degree, so that perception data can be accurately deduplicated against vehicle data, reducing the redundancy of the real-time traffic data obtained by fusing the two.
In a first aspect, an embodiment of the present invention provides a data processing method, including:
determining position information, length information and width information of each sensing target according to a sensing data set acquired from sensing equipment, and determining distance information between each sensing target and the sensing equipment based on the position information of each sensing target;
determining position information optimization parameters of the perception targets according to the distance information;
optimizing the position information of each sensing target based on the position information optimization parameters of each sensing target to obtain optimized position information, and determining a sensing area of each sensing target according to the optimized position information, the length information and the width information of each sensing target;
determining an external region of a target vehicle according to a vehicle data set acquired from the target vehicle;
when the intersection of the external region of any target vehicle and the perception region of any perception target is determined, determining the degree of coincidence;
determining whether the perception target and the target vehicle correspond based on the degree of coincidence.
The technical solution of the embodiment of the invention provides the data processing method described above. Because the perception device views the scene from above, perception targets within its perception region occlude one another, so the position information determined for each target must be optimized. Since the degree of occlusion differs across the perception region, the far zone, middle zone or near zone in which each perception target lies can be determined from its distance to the perception device, and the position information optimization parameter associated with that zone is taken as the optimization parameter for the target. Optimizing each target's position information with this parameter yields optimized position information that is more accurate and closer to the actual position, so the perception region determined for each target from its optimized position information together with its length and width information is correspondingly more accurate. After the circumscribed region of the target vehicle is determined from the vehicle data set acquired from the target vehicle, it can be checked whether each perception region intersects each circumscribed region; where an intersection exists, the coincidence degree is determined and used to decide whether the perception target corresponds to the target vehicle. Once a correspondence is established, the perception data of that target is deleted from the perception data set, realizing coincidence-based deduplication of the perception data, improving deduplication accuracy and further reducing the redundancy of the real-time traffic data obtained by fusing the vehicle data and the perception data.
Further, determining location information optimization parameters of each perception target according to the distance information includes:
determining the region information to which the perception target belongs according to the distance information, and determining the position information optimization parameter of the perception target according to the region information.
Further, when it is determined that the intersection exists between the circumscribed area of any target vehicle and the perception area of any perception target, determining the degree of coincidence comprises:
when the intersection of the external region of any target vehicle and the sensing region of any sensing target is determined, determining the intersection area; and determining the coincidence degree according to the intersection area and the sensing area corresponding to the sensing area or the circumscribed area corresponding to the circumscribed area.
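As an illustrative sketch of this step, both regions can be treated as axis-aligned rectangles (a simplification — the text does not fix their shape), giving the intersection area and a coincidence degree. Dividing by the perception area is one of the two options the text allows; all function names are hypothetical:

```python
def rect_area(box):
    """Area of an axis-aligned box given as (x_min, y_min, x_max, y_max)."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def intersection_area(a, b):
    """Area of the intersection of two axis-aligned boxes (0 if disjoint)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def coincidence_degree(perception_box, circumscribed_box):
    """Coincidence degree: intersection area divided by the perception area
    (dividing by the circumscribed area instead is equally allowed by the text)."""
    inter = intersection_area(perception_box, circumscribed_box)
    return inter / rect_area(perception_box) if inter else 0.0
```

For example, a 4x2 perception box overlapping half of its width with a 4x2 circumscribed box yields `coincidence_degree((0, 0, 4, 2), (2, 0, 6, 2)) == 0.5`.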
Further, the method further includes:
determining a coincidence degree optimization parameter for each perception target according to the distance information.
Further, determining whether the perception target and the target vehicle correspond based on the degree of coincidence comprises:
if the coincidence degree is greater than a first preset threshold, determining that the perception target corresponds to the target vehicle; if the coincidence degree is less than or equal to the first preset threshold and greater than a second preset threshold, updating the position information, length information and width information of the perception target based on the previous and/or next perception data in the perception data set to obtain first target perception data; determining, according to the coincidence degree optimization parameter, the optimized coincidence degree of the first perception region corresponding to the first target perception data and the circumscribed region of the target vehicle; and if the optimized coincidence degree is greater than the first preset threshold, determining that the perception target corresponds to the target vehicle.
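The two-threshold decision above can be sketched as follows. The concrete threshold values are hypothetical (the text leaves them unspecified), and `recompute_optimized` stands in for the re-evaluation performed with the first target perception data:

```python
def targets_correspond(coincidence, recompute_optimized,
                       first_threshold=0.8, second_threshold=0.5):
    """Decide whether a perception target corresponds to a target vehicle.
    recompute_optimized() returns the optimized coincidence degree obtained
    after updating the target with adjacent perception data."""
    if coincidence > first_threshold:
        return True                                     # direct match
    if second_threshold < coincidence <= first_threshold:
        return recompute_optimized() > first_threshold  # borderline: re-evaluate
    return False                                        # no correspondence
```

With these hypothetical thresholds, a degree of 0.9 matches directly, a degree of 0.6 matches only if the optimized degree exceeds 0.8, and a degree of 0.3 never matches.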
Further, determining the optimized coincidence degree of the first perception region corresponding to the first target perception data and the circumscribed region of the target vehicle according to the coincidence degree optimization parameter includes:
determining a basic coincidence degree of the first perception region and the circumscribed region; and determining the optimized coincidence degree according to the coincidence degree optimization parameter and the basic coincidence degree.
Further, determining the basic coincidence degree of the first perception region and the circumscribed region includes:
determining the intersection area of the first perception region and the circumscribed region, and determining the basic coincidence degree according to the intersection area and the perception area or the circumscribed area.
In a second aspect, an embodiment of the present invention further provides a data processing apparatus, including:
the distance determining module is used for determining the position information, the length information and the width information of each perception target according to a perception data set acquired from a perception device, and determining the distance information between each perception target and the perception device based on the position information of each perception target;
the first optimization parameter determining module is used for determining position information optimization parameters of the sensing targets according to the distance information;
a sensing region determining module, configured to optimize the position information of each sensing target based on the position information optimization parameter of each sensing target to obtain optimized position information, and determine a sensing region of each sensing target according to the optimized position information, the length information, and the width information of each sensing target;
the external region determining module is used for determining an external region of a target vehicle according to a vehicle data set acquired from the target vehicle;
the coincidence degree determining module is used for determining the coincidence degree when the intersection of the circumscribed area of any target vehicle and the sensing area of any sensing target is determined;
and the execution module is used for determining whether the perception target corresponds to the target vehicle or not based on the coincidence degree.
In a third aspect, an embodiment of the present invention further provides a server, where the server includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the data processing method of any one of the first aspects.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions for performing the data processing method according to any one of the first aspect when executed by a computer processor.
In a fifth aspect, the present application provides a computer program product comprising computer instructions which, when run on a computer, cause the computer to perform the data processing method as provided in the first aspect.
It should be noted that all or part of the computer instructions may be stored on the computer readable storage medium. The computer-readable storage medium may be packaged with a processor of a data processing apparatus, or may be packaged separately from the processor of the data processing apparatus, which is not limited in this application.
For the descriptions of the second, third, fourth and fifth aspects in this application, reference may be made to the detailed description of the first aspect; in addition, for the beneficial effects described in the second aspect, the third aspect, the fourth aspect and the fifth aspect, reference may be made to the beneficial effect analysis of the first aspect, and details are not repeated here.
In the present application, the names of the above-mentioned data processing apparatuses do not limit the devices or functional modules themselves, and in actual implementation, the devices or functional modules may appear by other names. As long as the functions of the respective devices or functional modules are similar to those of the present application, they fall within the scope of the claims of the present application and their equivalents.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below cover only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a sensing device and a sensing area corresponding to the sensing device in a data processing method according to an embodiment of the present invention;
FIG. 3 is a flow chart of another data processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of determining interpolation time in another data processing method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of interpolation data corresponding to interpolation time determined in another data processing method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a first vehicle data set and a perception data set being time aligned based on isochronous pulses in another data processing method according to an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating determination of optimized location information corresponding to a sensing target in another data processing method according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but could have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like. In addition, the embodiments and features of the embodiments in the present invention may be combined with each other without conflict.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "such as" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
A digital twin is a simulation process that fully uses data such as physical models, sensor updates and operation history, integrates multiple disciplines, physical quantities, scales and probabilities, and completes a mapping in virtual space so as to reflect the full life cycle of the corresponding physical equipment. Establishing a real-time traffic model requires acquiring real-time traffic data. On an intelligently upgraded road, perception data can be acquired from perception devices (such as cameras, millimeter-wave radars and lidars) deployed on lamp posts and similar poles, from which information such as the position, speed, direction and size of each perception target can be determined; vehicle data can likewise be acquired from the vehicle system, from which the position, speed, direction, size and other information of the vehicle can be determined. The perception data and the vehicle data are then fused using artificial intelligence and deep learning algorithms: real-time traffic data is determined from the two data sets, digitized and individualized per target, the position, speed, direction, size and other information of each precisely located target is identified, and the real-time traffic model is constructed from these data by digital twin technology.
As described above, when the sensing target determined by the sensing data coincides with the target vehicle represented by the vehicle data, the algorithm may be used to perform deduplication, and finally the fused one-to-one real-time traffic stream data is output. In the prior art, the accuracy of determining whether a perception target and a target vehicle are the same target object based on a distance calculation method is not high, so that the data deduplication effect is poor.
Therefore, the data processing method can accurately determine the sensing area, further determine more accurate coincidence degree according to the sensing area and the external area, and accurately determine whether the sensing target and the target vehicle are the same target object. When the perception target and the target vehicle are determined to be the same target object, data deduplication is performed on perception data based on vehicle data, the data deduplication effect is optimized, and the efficiency of constructing the digital twin model is further improved.
The data processing method proposed in the present application will be described in detail below with reference to the drawings and embodiments.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention, where the embodiment is applicable to a situation where a data deduplication effect needs to be improved, the method may be executed by a data processing apparatus, as shown in fig. 1, and specifically includes the following steps:
step 110, determining position information, length information and width information of each sensing target according to a sensing data set acquired from a sensing device, and determining distance information between each sensing target and the sensing device based on the position information of each sensing target.
Because the sensing device views the scene from above, the sensing targets within its sensing region occlude one another, and the occlusion varies with the distance between a target and the device, which in turn affects the accuracy of the position information determined from the sensing data. The position information of each sensing target determined from the sensing data can therefore be optimized based on the distance between the target and the sensing device.
Specifically, first, the position information (x, y), the length information, and the width information of each sensing target may be determined based on the sensing data set acquired from the sensing device, and then the distance information L between each sensing target and the sensing device may be determined according to the position information of each sensing target and the position information of the sensing device. Where x represents the longitude of the perceptual target and y represents the latitude of the perceptual target.
In the embodiment of the invention, after the position information of each sensing target is determined according to the sensing data set, the distance information between each sensing target and the sensing equipment can be determined according to the position information of each sensing target and the position information of the sensing equipment, so that a data base is provided for optimizing the position information of each sensing target.
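As a minimal sketch of this step, the distance L between a target at longitude/latitude (x, y) and the sensing device can be computed with the haversine formula; the formula choice, the coordinate values and the function name below are illustrative assumptions, since the text does not specify how the distance is calculated:

```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters between two (longitude, latitude) points."""
    R = 6_371_000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Distance L between a sensing device and a perception target (hypothetical coordinates)
device = (116.3975, 39.9087)   # device (longitude, latitude)
target = (116.3980, 39.9092)   # target (longitude, latitude)
L = haversine_m(device[0], device[1], target[0], target[1])
```

At these coordinates the two points are roughly 70 m apart, well within a typical roadside sensing region.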
And 120, determining position information optimization parameters of the perception targets according to the distance information.
Fig. 2 is a schematic diagram of a sensing device and its corresponding sensing region in a data processing method according to an embodiment of the present invention. As shown in fig. 2, a blind zone PR exists in the detection direction of the sensing device, and the sensing region has a length CL and a width CW. The length L1 of the blind zone PR and the length CL and width CW of the sensing region can be calibrated after the sensing device is installed. Along the detection direction, the sensing region can be divided from far to near into three zones: a far zone RR, a middle zone MR and a near zone NR, where the far zone RR accounts for 10% of the total length of the sensing region, the middle zone MR for 20%, and the near zone NR for 70%. That is, after the distance information L between a perception target and the sensing device is determined: if 25 m < L < 70%·CL, the target is located in the near zone NR; if 70%·CL ≤ L < 90%·CL, in the middle zone MR; and if 90%·CL ≤ L < 100%·CL, in the far zone RR.
Specifically, after the distance information L between the perception target and the sensing device is determined, L may be compared with 25 m, 70% CL, 90% CL and 100% CL to determine whether the perception target is located in the far zone RR, the middle zone MR or the near zone NR, and then the position information optimization parameter AMPLIFY_RATIO corresponding to the zone information to which the perception target belongs may be determined as the position information optimization parameter AMPLIFY_RATIO corresponding to the perception target.
In practical application, the position information optimization parameter corresponding to the far zone RR may be AMPLIFY_RATIO_RR = 145%, the position information optimization parameter corresponding to the middle zone MR may be AMPLIFY_RATIO_MR = 125%, and the position information optimization parameter corresponding to the near zone NR may be AMPLIFY_RATIO_NR = 100%.
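As an illustrative sketch (not part of the claimed embodiment), the zone classification and parameter lookup described above may be written as follows; the function and constant names, and the use of a 25 m blind-zone bound, are assumptions taken from the example values in the text:

```python
# Classify a perception target into the near/middle/far zone from its
# distance L to the sensing device, then look up the position information
# optimization parameter. 25 m is the example blind-zone length; the
# 70%/90% splits and 145%/125%/100% ratios follow the embodiment.

BLIND_ZONE_M = 25.0  # length L1 of the blind zone PR (calibrated on install)
AMPLIFY_RATIO = {"NR": 1.00, "MR": 1.25, "RR": 1.45}

def zone_of(L: float, CL: float) -> str:
    """Return 'NR', 'MR' or 'RR' for distance L within a region of length CL."""
    if BLIND_ZONE_M < L < 0.70 * CL:
        return "NR"              # near zone: blind-zone edge up to 70% CL
    if 0.70 * CL <= L < 0.90 * CL:
        return "MR"              # middle zone: 70% CL to 90% CL
    if 0.90 * CL <= L < CL:
        return "RR"              # far zone: 90% CL to 100% CL
    raise ValueError("target outside the calibrated sensing region")

def amplify_ratio(L: float, CL: float) -> float:
    return AMPLIFY_RATIO[zone_of(L, CL)]
```

For example, with CL = 200 m, a target at L = 150 m falls in the middle zone and receives the 125% parameter.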
In the embodiment of the invention, after the distance information between the sensing target and the sensing equipment is determined, the region information to which the sensing target belongs can be determined according to the distance information, and further the position information optimization parameter corresponding to the sensing target can be determined according to the position information optimization parameter corresponding to the region information.
Step 130, optimizing the position information of each sensing target based on the position information optimization parameter of each sensing target to obtain optimized position information, and determining a sensing area of each sensing target according to the optimized position information, the length information and the width information of each sensing target.
Because the sensing device observes from an overhead viewing angle, the perception targets in the far zone RR, the middle zone MR and the near zone NR all suffer from occlusion; in particular, in the middle zone MR and the far zone RR, the coordinates of the anchor point in the perception result need to be adjusted in the direction opposite to the movement direction of the perception target.
Specifically, after the position information optimization parameter AMPLIFY_RATIO corresponding to the perception target is determined, the position information (x, y) of the perception target may be optimized according to AMPLIFY_RATIO to obtain the optimized position information (x1, y1), where x1 represents the optimized longitude of the perception target and y1 represents the optimized latitude of the perception target. Specifically, when (x, y) is optimized based on AMPLIFY_RATIO, it can be determined that x1 = x - cos(h) × sqrt(x × x + y × y) × AMPLIFY_RATIO and y1 = y - sin(h) × sqrt(x × x + y × y) × AMPLIFY_RATIO. Of course, if the heading angle is not in the first quadrant, the angle value needs to be converted accordingly.
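A minimal sketch of this optimization step, assuming (x, y) is treated as a planar offset and the heading angle h is already a first-quadrant angle in radians (other quadrants would need the angle conversion the text mentions):

```python
import math

# Optimization step from the embodiment:
#   x1 = x - cos(h) * sqrt(x*x + y*y) * ratio
#   y1 = y - sin(h) * sqrt(x*x + y*y) * ratio

def optimize_position(x: float, y: float, h: float, amplify_ratio: float):
    d = math.sqrt(x * x + y * y)          # distance term sqrt(x^2 + y^2)
    x1 = x - math.cos(h) * d * amplify_ratio
    y1 = y - math.sin(h) * d * amplify_ratio
    return x1, y1
```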
And then, the coordinates of four vertexes of the sensing area corresponding to the sensing target can be determined according to the optimized position information (x 1, y 1), the length information and the width information, and then the sensing area of the sensing target can be determined according to the coordinates of the four vertexes.
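The four vertices can be derived from the optimized center, the length and width, and the heading; the sketch below is illustrative, and the axis conventions (heading measured counterclockwise in radians, x along the first axis) are assumptions rather than details fixed by the text:

```python
import math

# Four vertices of a target's rectangular sensing region from its optimized
# center (x1, y1), perceived length szl, perceived width szw and heading h.

def region_vertices(x1, y1, szl, szw, h):
    dx, dy = math.cos(h), math.sin(h)      # unit vector along the heading
    px, py = -dy, dx                       # unit vector across the heading
    hl, hw = szl / 2.0, szw / 2.0
    return [
        (x1 + hl * dx + hw * px, y1 + hl * dy + hw * py),  # front-left
        (x1 + hl * dx - hw * px, y1 + hl * dy - hw * py),  # front-right
        (x1 - hl * dx - hw * px, y1 - hl * dy - hw * py),  # rear-right
        (x1 - hl * dx + hw * px, y1 - hl * dy + hw * py),  # rear-left
    ]
```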
In the embodiment of the invention, the position information of the perception target is optimized based on the position information optimization parameter, the obtained optimized position information is more accurate, and the perception area constructed according to the more accurate optimized position information is more accurate.
Step 140, determining the circumscribed region of the target vehicle according to the vehicle data set acquired from the target vehicle.
Specifically, the position information, the length information, and the width information of the target vehicle may be determined according to a vehicle data set acquired from the target vehicle, and the coordinates of four vertices of the target vehicle may be determined according to the position information, the length information, and the width information of the target vehicle, and then the circumscribed area of the target vehicle may be determined according to the coordinates of the four vertices.
It should be noted that the vehicle data set and the perception data set do not generally match because the time interval for acquiring the vehicle data set from the target vehicle does not coincide with the time interval for acquiring the perception data set from the perception device. Therefore, in order to determine whether an intersection exists between the circumscribed area of the target vehicle and the sensing area of the sensing target, before the circumscribed area and the sensing area are determined, interpolation processing needs to be performed on the vehicle data set, and then the vehicle data set and the sensing data set are matched to determine target vehicle data and target sensing data which are matched in time, the circumscribed area is determined based on the target vehicle data, and the sensing area is determined based on the target sensing data.
In the embodiment of the invention, the vehicle data set is processed to construct the circumscribed region corresponding to the target vehicle, so that the construction of the circumscribed region is realized.
Step 150, when it is determined that the circumscribed region of any target vehicle and the sensing region of any perception target intersect, determining the coincidence degree.
Specifically, whether an intersection exists between each sensing region and each circumscribed region may be determined through coordinate operations and set operations, and the intersection area may be calculated when an intersection is determined to exist. On one hand, after the area of the sensing region is determined, the area coincidence rate of the intersection area to the area of the sensing region can be further determined, and the coincidence degree of the circumscribed region and the sensing region can be determined according to this area coincidence rate. On the other hand, after the area of the circumscribed region is determined, the area coincidence rate of the intersection area to the area of the circumscribed region can be further determined, and the coincidence degree of the circumscribed region and the sensing region can be determined according to this area coincidence rate. In another aspect, the coincidence degree of the circumscribed region and the sensing region may be determined directly from the intersection area. In yet another aspect, the coincidence degree of the circumscribed region and the sensing region may be determined from both the intersection area and the area coincidence rate.
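A simplified sketch of this overlap computation, assuming axis-aligned rectangles given as (x_min, y_min, x_max, y_max); real sensing regions may be rotated by the heading, which would require general polygon clipping instead:

```python
# Intersection area of two axis-aligned rectangles, and the coincidence
# degree taken here as the area coincidence rate relative to the sensing
# region (one of the several options the text describes).

def intersection_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def rect_area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def coincidence_degree(sensing, circumscribed):
    """Area coincidence rate of the intersection relative to the sensing region."""
    inter = intersection_area(sensing, circumscribed)
    return inter / rect_area(sensing) if inter > 0 else 0.0
```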
In the embodiment of the invention, after it is determined that an intersection exists between the sensing region and the circumscribed region, the intersection area can be determined, the area coincidence rate can be determined from the intersection area and the area of the sensing region or the area of the circumscribed region, and the coincidence degree of the circumscribed region and the sensing region can be further determined according to the intersection area and/or the area coincidence rate. The coincidence degree provides a data basis for determining whether the perception target corresponds to the target vehicle.
Step 160, determining whether the perception target corresponds to the target vehicle based on the coincidence degree.
Because the far zone RR, the middle zone MR and the near zone NR of the sensing region suffer from occlusion to different degrees, different first preset thresholds need to be set for the far zone RR, the middle zone MR and the near zone NR, so that whether a perception target corresponds to a target vehicle is determined based on the corresponding first preset threshold, improving the accuracy of the judgment. Of course, in practical applications, the same first preset threshold may also be used for the far zone RR, the middle zone MR and the near zone NR.
Specifically, when the perception target is in the far zone RR, comparing the coincidence degree with a first preset threshold corresponding to the far zone RR, and if the coincidence degree is greater than the first preset threshold corresponding to the far zone RR, determining that the perception target corresponds to the target vehicle; when the perception target is in the middle region MR, comparing the coincidence degree with a first preset threshold corresponding to the middle region MR, and if the coincidence degree is greater than the first preset threshold corresponding to the middle region MR, determining that the perception target corresponds to the target vehicle; and when the perception target is in the near zone NR, comparing the coincidence degree with a first preset threshold corresponding to the near zone NR, and if the coincidence degree is greater than the first preset threshold corresponding to the near zone NR, determining that the perception target corresponds to the target vehicle. After the sensing target is determined to correspond to the target vehicle, the sensing data corresponding to the sensing target can be deleted from the sensing data set, and therefore the de-duplication of the sensing data is achieved.
Of course, if the coincidence degree is smaller than the first preset threshold and larger than the second preset threshold, it is necessary to further determine whether the perception target corresponds to the target vehicle; and if the coincidence degree is smaller than a second preset threshold value, determining that the perception target does not correspond to the target vehicle.
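The per-zone decision in steps 150 to 160 can be sketched as follows; the embodiment does not give numeric thresholds, so the values below are illustrative placeholders:

```python
# Per-zone matching decision: coincidence above the zone's first preset
# threshold means the same target (the perception data is then deleted);
# between the second and first thresholds needs further judgment; below
# the second threshold means no correspondence. Threshold values assumed.

FIRST_THRESHOLD = {"RR": 0.3, "MR": 0.4, "NR": 0.5}   # per-zone, illustrative
SECOND_THRESHOLD = 0.1                                 # illustrative

def match_decision(coincidence: float, zone: str) -> str:
    if coincidence > FIRST_THRESHOLD[zone]:
        return "match"          # same target: deduplicate the perception data
    if coincidence > SECOND_THRESHOLD:
        return "undecided"      # requires further determination
    return "no-match"
```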
In the embodiment of the invention, whether the perception target corresponds to the target vehicle or not can be determined according to the overlapping degree, and the perception target and the target vehicle can be determined to be the same target when the perception target corresponds to the target vehicle, so that the perception data corresponding to the perception target can be deleted in the perception data set, and the duplication elimination of the perception data can be realized.
The data processing method provided by the embodiment of the invention comprises the following steps: determining position information, length information and width information of each sensing target according to a sensing data set acquired from sensing equipment, and determining distance information between each sensing target and the sensing equipment based on the position information of each sensing target; determining position information optimization parameters of the perception targets according to the distance information; optimizing the position information of each sensing target based on the position information optimization parameters of each sensing target to obtain optimized position information, and determining a sensing area of each sensing target according to the optimized position information, the length information and the width information of each sensing target; determining an external region of a target vehicle according to a vehicle data set acquired from the target vehicle; when the intersection of the external region of any target vehicle and the perception region of any perception target is determined, determining the degree of coincidence; determining whether the perception target and the target vehicle correspond based on the degree of coincidence. 
According to the technical scheme, the position information of each perception target can first be determined according to the perception data set acquired from the perception device. Because the perception device observes from an overhead viewing angle, the perception targets in the sensing region suffer from occlusion, so the position information of each perception target needs to be optimized. Because different areas of the sensing region are occluded to different degrees, the far zone, middle zone or near zone to which each perception target belongs can be determined according to the distance information between the perception target and the perception device, and the position information optimization parameter corresponding to that zone is determined as the position information optimization parameter of the perception target. The position information of each perception target is then optimized based on this parameter to obtain optimized position information, so that the optimization of the position information of each perception target is realized, and the obtained optimized position information is more accurate and closer to the actual position information. The sensing region of each perception target can then be determined according to the optimized position information, length information and width information of that target, and the sensing region so determined is correspondingly more accurate. After the circumscribed region of the target vehicle is determined from the vehicle data set acquired from the target vehicle, whether an intersection exists between each sensing region and each circumscribed region can be determined, the coincidence degree is determined when an intersection exists, and whether the perception target corresponds to the target vehicle is determined according to the coincidence degree. After the perception target is determined to correspond to the target vehicle, the perception data corresponding to the perception target is deleted from the perception data set, realizing data deduplication of the perception data based on the coincidence degree, improving the accuracy of deduplication, and further reducing the redundancy of the real-time traffic data obtained by fusing the vehicle data and the perception data.
Fig. 3 is a flowchart of another data processing method according to an embodiment of the present invention, which is embodied on the basis of the foregoing embodiment. As shown in fig. 3, in this embodiment, the method may further include:
and 310, acquiring a vehicle data set corresponding to the target vehicle from the target vehicle, and acquiring a perception data set from the perception device.
The vehicle data set includes a plurality of groups of vehicle data of a plurality of target vehicles, and the vehicle data may include information such as the position, speed, direction and size of the target vehicle. The sensing device may be a camera, a millimeter-wave radar, a lidar or the like mounted on a rod-like structure such as a lamp post, and the sensing device may send the perception data of all traffic participants in the sensing region, i.e., the perception result, to the server. The perception data set includes multiple groups of perception data, and the perception data may include information such as the positions, speeds, directions and sizes of multiple perception targets within the sensing range.
The server may obtain the vehicle data set corresponding to the target vehicle from the target vehicle based on a first time interval, and may obtain the perception data set from the sensing device based on a second time interval. Since the target vehicle does not send the vehicle data set to the server strictly at the first time interval, and the sensing device does not send the perception data set to the server strictly at the second time interval, the time intervals at which the server acquires the vehicle data set and the perception data set are not always strictly fixed.
Typically, the first preset time interval is greater than the second preset time interval; for example, the first time interval may be 200 ms and the second time interval may be 50 ms. Since the first time interval is the vehicle reporting interval, there is considerable uncertainty about its specific transmission time. The second time interval may be the truncation time of successive perception results and is therefore relatively more deterministic.
Specifically, when a target vehicle accessing the server enters the sensing region corresponding to a sensing device accessing the server, the server acquires a vehicle data set vehSyncQue i = {vehpos 1, vehpos 2, ..., vehpos m} from the target vehicle based on the first preset time interval, where i denotes the i-th target vehicle accessing the server, m denotes the number of groups of vehicle data acquired from the i-th target vehicle, and the value of m is related to the first preset time interval and the continuous uploading time. In practical application, the server may obtain vehicle data sets vehSyncQue = {vehSyncQue 1, vehSyncQue 2, ..., vehSyncQue n} from all target vehicles accessing the server, where n denotes the number of target vehicles accessing the server. The server may also obtain, from the sensing device, a perception data set perceptSyncQue = {perceptObjList 1, perceptObjList 2, ..., perceptObjList p} based on the second preset time interval, where p denotes the number of groups of perception data obtained from the sensing device, and the value of p is related to the second preset time interval and the continuous uploading time. Each perceptObjList may be, for example, picture data taken at one point in time by a camera, in which there may be a plurality of target vehicles.
vehpos = {vid, x, y, h, s, tm, vl, vw}, where vid is the unique number of the target vehicle, x is the longitude, y is the latitude (WGS coordinate system), h is the heading angle, s is the speed (in m/s), tm is the time (typically a transmission timestamp), vl is the length (in m), and vw is the width (in m). perceptObjList = {poleid, tm, {po1, po2, ..., poi}}, where poleid is the id of the rod on which the sensing device is located, tm is the time point (typically a transmission timestamp), and poi is the data of the i-th perception target, poi = perceptObj = {pid, x, y, h, s, tm, szl, szw}, where pid is the unique id of the perception target and remains unique and unchanged within the sensing region, x is the longitude, y is the latitude (WGS coordinate system), h is the heading angle (counterclockwise from the horizontal direction), s is the speed (in m/s), tm is the time (typically a transmission timestamp), szl is the perceived length (in m), and szw is the perceived width (in m).
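The record layouts above can be sketched as simple data structures; the field names follow the embodiment, while the Python types and the use of dataclasses are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VehPos:                 # one vehicle report (vehpos)
    vid: str                  # unique vehicle number
    x: float                  # longitude (WGS coordinate system)
    y: float                  # latitude
    h: float                  # heading angle
    s: float                  # speed, m/s
    tm: int                   # time, typically a transmission timestamp
    vl: float                 # vehicle length, m
    vw: float                 # vehicle width, m

@dataclass
class PerceptObj:             # one perceived target (perceptObj)
    pid: str                  # unique target id within the sensing region
    x: float                  # longitude
    y: float                  # latitude
    h: float                  # heading, counterclockwise from horizontal
    s: float                  # speed, m/s
    tm: int                   # timestamp
    szl: float                # perceived length, m
    szw: float                # perceived width, m

@dataclass
class PerceptObjList:         # one perception frame from a rod-mounted device
    poleid: str               # id of the rod carrying the sensing device
    tm: int                   # timestamp of the frame
    objs: List[PerceptObj]    # po1, po2, ..., poi
```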
For the vehicle data reported in real time, each group of vehicle data is received and added to the tail part of vehSyncQue i corresponding to the vehicle identification according to the vehicle identification of the target vehicle; and for the perception data reported in real time, each group of perception data is received and is added to the tail of the perceptsyncQue.
The server can be accessed with a plurality of target vehicles and a plurality of sensing devices, in the application, the vehicle data set can include a plurality of groups of vehicle data sets corresponding to the plurality of target vehicles, and the sensing data set can include a group of sensing data sets corresponding to one sensing device.
In addition, since the time interval for acquiring the vehicle data set from the target vehicle is different from the time interval for acquiring the perception data set from the sensing device, the vehicle data set and the perception data set are mismatched; moreover, the accuracy of the perception data is lower than that of the vehicle data.
In the embodiment of the invention, the acquisition of the vehicle data and the perception data is realized.
Step 311, determining the interpolation times of the vehicle data set according to the first time interval corresponding to the vehicle data set and the second time interval corresponding to the perception data set, and determining the interpolation data corresponding to each interpolation time according to each piece of vehicle data contained in the vehicle data set.
Because the time intervals for acquiring the vehicle data set and the perception data set by the server are not consistent, and the first preset time interval for acquiring the vehicle data set is greater than the second preset time interval for acquiring the perception data set, in order to match the time of the vehicle data set and the perception data set, the data set with a larger time interval needs to be interpolated, that is, the vehicle data set needs to be interpolated, so that the time interval of the vehicle data set is close to the time interval of the perception data set.
In one embodiment, determining an interpolation time for the vehicle data set based on a first time interval corresponding to the vehicle data set and a second time interval corresponding to the perception data set includes:
determining an interpolation time interval according to the first time interval and the second time interval; and determining the interpolation time between the vehicle time and the previous vehicle time according to the time difference and the interpolation time interval of the previous vehicle time corresponding to the vehicle time and the vehicle time, and determining the interpolation time between the vehicle time and the next vehicle time corresponding to the vehicle time according to the interpolation time and the interpolation time interval which are closest to the vehicle time and between the vehicle time and the previous vehicle time.
Specifically, when the first time interval is 200 ms and the second time interval is 50 ms, the interpolation time interval may be determined to be 50 ms. For the vehicle time t(i) contained in any vehicle data vehpos i in vehSyncQue i, the previous vehicle time t(i-1) and the next vehicle time t(i+1) corresponding to t(i) can be determined. Next, the time difference dti = t(i) - t(i-1) may be calculated, and the number of interpolation intervals n between t(i-1) and t(i) may be calculated as dti divided by STM and rounded down, where STM is the second preset time interval of 50 ms and n is the number of complete 50 ms intervals between t(i-1) and t(i). Fig. 4 is a schematic diagram of determining an interpolation time in another data processing method according to an embodiment of the present invention. As shown in fig. 4, n = 4, and the interpolation time between t(i-1) and t(i) that is closest to t(i) lags t(i) by tl = dti - n × STM = dti - 4 × 50. Since the time interval of the first vehicle data set obtained by interpolating the vehicle data set is 50 ms, the sum of tl and the distance tf from t(i) to the closest interpolation time between t(i) and t(i+1) is 50 ms, that is, tl + tf = 50, and thus tf = 50 - tl can be determined. After tf is determined, the interpolation times between t(i) and t(i+1) may be determined in turn.
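The bookkeeping above can be sketched as follows, with STM = 50 ms; the function name and the (time difference, offset) return convention are illustrative:

```python
# n whole 50 ms intervals fit between t(i-1) and t(i); tl is the gap from
# the interpolation time nearest t(i) back to t(i); tf = STM - tl is the
# offset of the first interpolation time after t(i).

STM = 50  # second preset time interval, ms

def interp_offsets(t_prev: int, t_cur: int):
    dti = t_cur - t_prev          # time difference between consecutive reports
    n = dti // STM                # number of whole 50 ms intervals
    tl = dti - n * STM            # distance from t_cur back to the grid
    tf = STM - tl                 # first interpolation offset after t_cur
    return n, tl, tf
```

For example, two reports 230 ms apart give n = 4 grid intervals, tl = 30 ms and tf = 20 ms.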
In one embodiment, the vehicle data includes position information of the target vehicle at the vehicle time, and the interpolated data includes position information of the target vehicle at the interpolated time, and accordingly, determining the interpolated data corresponding to each interpolated time according to each vehicle data included in the vehicle data set includes:
determining an interpolation time interval according to the first time interval and the second time interval; determining position information corresponding to the target vehicle at each interpolation time according to position information, vehicle speed and the interpolation time interval corresponding to the target vehicle at each vehicle time, wherein the position information, the vehicle speed and the interpolation time interval are contained in each vehicle data; and determining longitude and latitude contained in position information corresponding to each interpolation time of the target vehicle as the interpolation data corresponding to each interpolation time.
When the position information and the vehicle speed of the target vehicle at the vehicle time are known, the position information of the target vehicle at any interpolation time can be determined, and further the position information of the target vehicle at the interpolation time can be determined as interpolation data corresponding to the interpolation time.
Specifically, fig. 5 is a schematic diagram of determining the interpolation data corresponding to an interpolation time in another data processing method provided by the embodiment of the present invention. As shown in fig. 5, any vehicle data vehpos i in vehSyncQue i may include the vehicle time t(i), and the vehicle position (xi, yi) and vehicle speed vp(i) of the target vehicle. When the vehicle position corresponding to t(i) is (xi, yi) and the vehicle speed is vp(i), the vehicle position corresponding to the interpolation time between t(i) and t(i+1) that is at distance tf from t(i), that is, the first interpolation time between t(i) and t(i+1), is (xi + vp(i) × tf, yi + vp(i) × tf). Based on the same calculation method, the vehicle position corresponding to the second interpolation time between t(i) and t(i+1) is (xi + vp(i) × (tf + 50), yi + vp(i) × (tf + 50)), and in this way the interpolation data corresponding to all interpolation times can be determined.
In the embodiment of the invention, the interpolation time and the interpolation data of the vehicle data set are determined.
Step 312, obtaining the first vehicle data set corresponding to the target vehicle according to the interpolation times, the interpolation data corresponding to the interpolation times, and the vehicle data set.
In one embodiment, step 312 may specifically include:
inserting the interpolation data into the vehicle data set according to the interpolation times; deleting any interpolation data after determining that the interpolation time corresponding to that interpolation data coincides with a vehicle time; and deleting the vehicle data corresponding to any vehicle time after determining that the time intervals between that vehicle time and the interpolation times corresponding to the two adjacent interpolation data are both smaller than the second time interval, so as to obtain the first vehicle data set corresponding to the target vehicle.
Specifically, the vehicle data set contains vehicle data arranged in chronological order, and therefore the position of each interpolation data in the vehicle data set can be determined based on the interpolation time contained in that interpolation data, so as to insert the interpolation data into the vehicle data set. Of course, when interpolation data are inserted into the vehicle data set, if the interpolation time contained in any interpolation data coincides with the vehicle time contained in any vehicle data in the vehicle data set, that interpolation data is deleted; and if the time intervals between any vehicle time and the interpolation times contained in the two adjacent interpolation data are both smaller than the second preset time interval (as shown in fig. 4, the time interval tl between t(i) and the preceding interpolation time and the time interval tf between t(i) and the following interpolation time are both smaller than 50 ms), the vehicle data corresponding to that vehicle time is deleted, avoiding data redundancy. After the redundant interpolation data and/or vehicle data are deleted, the first vehicle data set vehSyncQue I = {vehpos 1, vehpos 2, ..., vehpos N} corresponding to the target vehicle can be obtained.
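The merge-and-deduplicate step can be sketched as below; representing records as (time_ms, payload, is_interp) tuples is an assumption for illustration, not the patent's data layout:

```python
# Merge interpolated records into the time-ordered vehicle data set:
# drop an interpolated record whose time collides with a real report, and
# drop a real report squeezed between two interpolation times less than
# STM apart on both sides.

STM = 50  # second preset time interval, ms

def merge_records(vehicle, interp):
    vehicle_times = {t for t, _, _ in vehicle}
    # discard interpolation records that coincide with a vehicle time
    interp = [r for r in interp if r[0] not in vehicle_times]
    merged = sorted(vehicle + interp, key=lambda r: r[0])
    out = []
    for i, rec in enumerate(merged):
        if not rec[2] and 0 < i < len(merged) - 1:
            prev_r, next_r = merged[i - 1], merged[i + 1]
            # real report between two nearby interpolation times: redundant
            if (prev_r[2] and next_r[2]
                    and rec[0] - prev_r[0] < STM and next_r[0] - rec[0] < STM):
                continue
        out.append(rec)
    return out
```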
In the embodiment of the invention, the vehicle data sets are supplemented based on the interpolation time and the interpolation data, and the first vehicle data set corresponding to the target vehicle is obtained, so that the time intervals of the first vehicle data set and the perception data set are kept consistent.
Step 313, matching the first vehicle data set with the perception data set, and determining the target perception data and the target vehicle data according to the matching result.
In one embodiment, step 313 may specifically include:
determining a plurality of time points of equal time intervals, wherein the time intervals of the plurality of time points are smaller than the time intervals of the vehicle data set; for any time point, determining vehicle data corresponding to the closest vehicle time to the time point in the first vehicle data set, and determining perception data corresponding to the closest perception time to the time point in the perception data set; and determining vehicle data corresponding to the vehicle time closest to the time point as the target vehicle data, and determining perception data corresponding to the perception time closest to the time point as the target perception data.
In practical application, the pulse time corresponding to each pulse of an isochronous pulser can be determined; each pulse time is traversed to determine a first difference between any pulse time and the first vehicle time contained in each first vehicle data in the first vehicle data set, and a second difference between that pulse time and the sensing time contained in each perception data in the perception data set; and the first vehicle data corresponding to the first vehicle time with the minimum first difference is determined as the target vehicle data, and the perception data corresponding to the sensing time with the minimum second difference is determined as the target perception data, the target perception data and the target vehicle data being the aligned and matched perception data and vehicle data.
Specifically, an isochronous pulser may first be constructed, where the time interval at which the isochronous pulser sends pulses is 50 ms, and the pulse time corresponding to each pulse is denoted pti, where i denotes the pulse number. Fig. 6 is a schematic diagram of performing time alignment on the first vehicle data set and the perception data set based on isochronous pulses in another data processing method provided by an embodiment of the present invention. As shown in fig. 6, after the isochronous pulser sends a pulse (with pulse time pti), the perception data set perceptSyncQue and the first vehicle data set vehSyncQue I corresponding to each target vehicle accessing the server may be traversed from the head, the first difference ptdiff1 between the vehicle time contained in each vehicle data in the first vehicle data set and pti may be calculated, and the second difference ptdiff2 between the sensing time contained in each perception data in perceptSyncQue and pti may be calculated. If both the first difference and the second difference are smaller than 50 ms, the first vehicle data vehpos a and the perception data perceptObjList b are placed in the matching set R = {{ptdiff1, vehpos a, ptdiff2, perceptObjList b}, ...}. Then, the first differences and the second differences in the matching set may be compared respectively, the first vehicle data corresponding to the first vehicle time with the minimum first difference is determined as the target vehicle data vehpos pti, and the perception data corresponding to the sensing time with the minimum second difference is determined as the target perception data perceptObjList pti.
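The alignment at a single pulse can be sketched as follows; operating on bare timestamp lists and returning the matched pair (or None) is an illustrative simplification of the matching set R:

```python
# For a pulse time pti, pick the vehicle record time and the perception
# frame time nearest to pti, accepting them only if both differences fall
# within the 50 ms window.

WINDOW_MS = 50

def match_at_pulse(pti, vehicle_times, percept_times):
    """Return (vehicle_time, percept_time) matched to pulse pti, or None."""
    vt = min(vehicle_times, key=lambda t: abs(t - pti))
    pt = min(percept_times, key=lambda t: abs(t - pti))
    if abs(vt - pti) < WINDOW_MS and abs(pt - pti) < WINDOW_MS:
        return vt, pt
    return None
```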
In the embodiment of the invention, the first vehicle time and the sensing time with the minimum pulse time difference corresponding to any pulse are determined by traversing each pulse sent by the isochronous pulser, so that the first vehicle data corresponding to the first vehicle time can be determined as the target vehicle data, the sensing data corresponding to the sensing time is determined as the target sensing data, and the alignment matching of the sensing data and the vehicle data is realized.
Step 314, determining position information, length information and width information of each sensing target based on the target sensing data, and determining distance information between each sensing target and the sensing device based on the position information of each sensing target.
Specifically, the position information (X, Y) of the sensing device may be pre-stored in the server. Each perception target in the target perception data perceptObjList is traversed, and the distance information L between the perception target and the sensing device is determined according to the position information (x, y) of the perception target and the position information (X, Y) of the sensing device. Specifically, (x, y) and (X, Y) may be subjected to coordinate calculation to determine the distance information L between the perception target and the sensing device.
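The text does not fix the coordinate calculation; one common choice for the short ranges involved is the equirectangular approximation over longitude/latitude, sketched here as an assumption:

```python
import math

# Approximate distance L in meters between a perception target at (x, y)
# and the sensing device at (X, Y), both in degrees longitude/latitude.

EARTH_R = 6371000.0  # mean Earth radius, m

def distance_m(x, y, X, Y):
    lat = math.radians((y + Y) / 2.0)                     # mean latitude
    dx = math.radians(x - X) * math.cos(lat) * EARTH_R    # east-west, m
    dy = math.radians(y - Y) * EARTH_R                    # north-south, m
    return math.hypot(dx, dy)
```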
In the embodiment of the invention, after the position information of each perception target is determined according to the target perception data, the distance information between each perception target and the perception equipment can be determined according to the position information of each perception target and the position information of the perception equipment, so that a data basis is provided for optimizing the position information of each perception target.
And 315, determining position information optimization parameters and coincidence degree optimization parameters of the perception targets according to the distance information.
In one embodiment, step 315 may specifically include:
and determining the region information to which the perception target belongs according to the distance information, and determining the position information optimization parameter and the coincidence degree optimization parameter of the perception target according to the region information.
As shown in fig. 2, the sensing region can be divided, along the detection direction from far to near, into a far region RR, a middle region MR and a near region NR. Since the length of the far region RR accounts for 10% of the total length CL of the sensing region, the length of the middle region MR accounts for 20% of CL, and the length of the near region NR accounts for 70% of CL, after determining the distance information L between the perception target and the sensing device: if L < 70%CL, the perception target can be determined to be located in the near region NR; if 70%CL ≤ L < 90%CL, it may be determined that the perception target is located in the middle region MR; and if 90%CL ≤ L ≤ 100%CL, it can be determined that the perception target is located in the far region RR, enabling determination of the region information to which the perception target belongs based on the distance information. Here, perceptObjList_pti = {poleId, tm, {po1, po2, ..., pon}}, and poi = perceptObj = {pid, x, y, h, s, tm, szw, szl}. Further, the position information optimization parameter AMPLIFY_RATIO and the coincidence degree optimization parameter μ of the perception target may be determined according to the region information to which the perception target belongs: for example, when the perception target is in the far region, AMPLIFY_RATIO_RR = 145% and μ = 1.1; when the perception target is in the middle region, it may be determined that AMPLIFY_RATIO_MR = 125% and μ = 1.05; and when the perception target is in the near region, it may be determined that AMPLIFY_RATIO_NR = 100% and μ = 1.02.
In the embodiment of the invention, after the distance information between the sensing target and the sensing equipment is determined, the region information to which the sensing target belongs can be determined according to the distance information, and further, the position information optimization parameter and the coincidence degree optimization parameter corresponding to the sensing target can be determined according to the position information optimization parameter and the coincidence degree optimization parameter corresponding to the region information.
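The region split and the per-region parameter lookup can be sketched as below; the function name is an assumption, while the 70%/20%/10% split and the parameter values follow the text.

```python
def region_and_params(distance, total_length):
    """Map the distance L to a region and its (AMPLIFY_RATIO, mu) pair.

    Near zone NR = first 70% of the sensing length, middle zone MR = next 20%,
    far zone RR = last 10%, as divided in the text.
    """
    if distance < 0.7 * total_length:
        return "NR", 1.00, 1.02   # near: AMPLIFY_RATIO_NR = 100%, mu = 1.02
    if distance < 0.9 * total_length:
        return "MR", 1.25, 1.05   # middle: AMPLIFY_RATIO_MR = 125%, mu = 1.05
    return "RR", 1.45, 1.10       # far: AMPLIFY_RATIO_RR = 145%, mu = 1.1

print(region_and_params(50, 100))   # ('NR', 1.0, 1.02)
print(region_and_params(95, 100))   # ('RR', 1.45, 1.1)
```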
Step 316, optimizing the position information of each sensing target based on the position information optimization parameter of each sensing target to obtain optimized position information, and determining a sensing area of each sensing target according to the optimized position information, the length information and the width information of each sensing target.
Fig. 7 is a schematic diagram of determining the optimized position information corresponding to a perception target in another data processing method provided in the embodiment of the present invention. As shown in fig. 7, after the position information optimization parameter AMPLIFY_RATIO corresponding to the perception target is determined, the position information (x, y) corresponding to the perception target may be optimized based on this parameter to obtain the optimized position information (x1, y1) (the corrected point) corresponding to the perception target. Here, x represents the longitude of the perception target, y its latitude, x1 the optimized longitude and y1 the optimized latitude. Specifically, the optimized longitude may be determined as x1 = x - (cos(h) × sqrt(x × x + y × y) × AMPLIFY_RATIO) and the optimized latitude as y1 = y - (sin(h) × sqrt(x × x + y × y) × AMPLIFY_RATIO). Of course, if the angle h is not in the first quadrant, the signs of the trigonometric terms change accordingly.
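A sketch of the correction follows, under two stated assumptions: (x, y) are treated as local planar offsets from the sensing device rather than raw longitude/latitude, and the extra displacement is taken as sqrt(x² + y²) × (AMPLIFY_RATIO − 1), so that the near zone's 100% ratio leaves a point unchanged; a fully literal reading of the formula would move every point by the whole amplified radius.

```python
import math

def optimize_position(x, y, h, amplify_ratio):
    """Shift a point back along viewing direction h (radians) by the
    amplification excess. Assumes (x, y) are local offsets from the device;
    the (amplify_ratio - 1) displacement factor is an assumption, chosen so
    that amplify_ratio = 1.0 (near zone) is the identity."""
    r = math.sqrt(x * x + y * y) * (amplify_ratio - 1.0)
    return x - math.cos(h) * r, y - math.sin(h) * r
```

With the far-zone ratio of 145%, a point at offset (3, 4) and heading h = 0 is pulled 2.25 units back along the x axis.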
In one embodiment, determining the sensing area of each sensing target according to the optimized position information, the length information and the width information of each sensing target includes:
determining optimized central position information of each sensing target according to optimized longitude and optimized latitude contained in the optimized position information of each sensing target; and determining the perception area of each perception target according to the optimization center position information, the length information and the width information of each perception target.
Specifically, after the optimized position information (x1, y1) corresponding to each perception target is determined, a sensing area may be constructed for each perception target in perceptObjList_pti. That is, the coordinates of the four vertices of the sensing area corresponding to the perception target may be determined according to the optimized position information (x1, y1), the width information szw and the length information szl, i.e., a sensing rectangular frame poRect_i = {L_i, T_i, R_i, B_i} is constructed for each perception target. With the longitude/latitude-to-meter conversion factor set to DEGREE_2_METER = 108000, it can be determined that: L = x1 - (szw/DEGREE_2_METER)/2, T = y1 + (szl/DEGREE_2_METER)/2, R = x1 + (szw/DEGREE_2_METER)/2, B = y1 - (szl/DEGREE_2_METER)/2.
It should be noted that one target perception datum perceptObjList_pti may determine the sensing areas corresponding to a plurality of perception targets. That is, when the perception data is a captured image, the sensing areas corresponding to a plurality of perception targets (perceived vehicles) may be determined from one image.
In the embodiment of the invention, the position information of the perception target is optimized based on the position information optimization parameter, the obtained optimized position information is more accurate, and the perception area constructed according to the more accurate optimized position information is more accurate.
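The {L, T, R, B} construction above can be sketched as one helper; the same center-plus-size construction is reused for the vehicle's circumscribed rectangle vehRect in step 317. The function name is an assumption.

```python
DEGREE_2_METER = 108000  # lon/lat degree-to-meter conversion from the text

def build_rect(cx, cy, width_m, length_m):
    """Axis-aligned box {L, T, R, B} around a center (cx, cy) in degrees,
    with width/length given in meters, per the step 316 formulas."""
    half_w = (width_m / DEGREE_2_METER) / 2
    half_l = (length_m / DEGREE_2_METER) / 2
    return (cx - half_w,   # L: left edge
            cy + half_l,   # T: top edge
            cx + half_w,   # R: right edge
            cy - half_l)   # B: bottom edge
```

For instance, `build_rect(x1, y1, szw, szl)` yields poRect_i for a perception target, and the analogous call with the vehicle's center and dimensions yields vehRect.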
And step 317, determining an external area of the target vehicle according to the target vehicle data.
In one embodiment, step 317 may specifically include:
determining position information, length information, and width information of the target vehicle based on the target vehicle data; determining longitude and latitude contained in the position information of the target vehicle as center position information of the target vehicle; determining an outer-connection area of the target vehicle according to the center position information, the length information and the width information of the target vehicle.
Specifically, an external region of the target vehicle may be constructed, that is, a circumscribed rectangular frame vehRect = {VL, VT, VR, VB} corresponding to the target vehicle is constructed. As above, with the longitude/latitude-to-meter conversion factor DEGREE_2_METER = 108000, it can be determined from the vehicle's center position (vx, vy), width vw and length vl that: VL = vx - (vw/DEGREE_2_METER)/2, VT = vy + (vl/DEGREE_2_METER)/2, VR = vx + (vw/DEGREE_2_METER)/2, VB = vy - (vl/DEGREE_2_METER)/2.
Of course, the above calculations are performed in a GIS coordinate system; during the calculation, the perception target and the target vehicle can both be converted to the GIS coordinate system.
In the embodiment of the invention, the external region corresponding to the target vehicle data is constructed by calculating the target vehicle data, so that the construction of the external region is realized.
And step 318, when the intersection of the circumscribed area of any target vehicle and the perception area of any perception target is determined, determining the coincidence degree.
The coincidence degree can be the area coincidence rate of the intersection area and the sensing area or the external area.
In one embodiment, step 318 may specifically include:
when the intersection of the circumscribed area of any target vehicle and the perception area of any perception target is determined, determining the intersection area; and determining the coincidence degree according to the intersection area and the sensing area corresponding to the sensing area.
In another embodiment, step 318 may specifically include:
when the intersection of the external region of any target vehicle and the sensing region of any sensing target is determined, determining the intersection area; and determining the coincidence degree according to the intersection area and the circumscribed area corresponding to the circumscribed area.
Specifically, the intersection of the poRect_i corresponding to each perception target and the vehRect corresponding to the target vehicle can be calculated. Based on the geographic coordinates and set operations, the intersection can be computed as povehRect = {UL, UT, UR, UB}, and the intersection area can then be determined as povehArea = (UR - UL) × (UT - UB).
On one hand, after the sensing area poArea of the sensing region is determined, the area coincidence rate of the intersection area and the sensing area can be further determined, and the coincidence degree of the circumscribed region and the sensing region can be determined according to this area coincidence rate. Specifically, after the sensing area poArea = (R - L) × (T - B) corresponding to the perception target is determined, the area coincidence rate povehCoin = povehArea/poArea of the intersection area and the sensing area can be determined from the intersection area povehArea and the sensing area poArea.
On the other hand, after the circumscribed area vehArea of the circumscribed region is determined, the area coincidence rate of the intersection area and the circumscribed area can be further determined, and the coincidence degree of the circumscribed region and the sensing region can be determined according to this area coincidence rate. Specifically, after the circumscribed area vehArea = (VR - VL) × (VT - VB) corresponding to the target vehicle is determined, the area coincidence rate povehCoin = povehArea/vehArea of the intersection area and the circumscribed area can be determined from the intersection area povehArea and the circumscribed area vehArea.
In the embodiment of the invention, after the sensing area and the circumscribed area are determined to have intersection, the intersection area can be determined, the area coincidence rate is determined according to the intersection area and the sensing area corresponding to the sensing area or the circumscribed area corresponding to the circumscribed area, the coincidence degree of the circumscribed area and the sensing area is further determined according to the area coincidence rate, and the coincidence degree can provide a data basis for determining whether the sensing target corresponds to the target vehicle.
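The intersection test and coincidence rate above can be sketched as follows; function names and the {L, T, R, B}-as-tuple layout (with T > B) are assumptions.

```python
def intersect(rect_a, rect_b):
    """Intersection povehRect = {UL, UT, UR, UB} of two boxes; None if disjoint."""
    ul = max(rect_a[0], rect_b[0])
    ut = min(rect_a[1], rect_b[1])
    ur = min(rect_a[2], rect_b[2])
    ub = max(rect_a[3], rect_b[3])
    if ur <= ul or ut <= ub:
        return None
    return (ul, ut, ur, ub)

def area(rect):
    l, t, r, b = rect
    return (r - l) * (t - b)

def coincidence(po_rect, veh_rect, against="perception"):
    """povehCoin: intersection area over the sensing area (or, with
    against='circumscribed', over the circumscribed area)."""
    inter = intersect(po_rect, veh_rect)
    if inter is None:
        return 0.0
    base = area(po_rect) if against == "perception" else area(veh_rect)
    return area(inter) / base

po = (0.0, 2.0, 2.0, 0.0)    # 2x2 sensing box
veh = (1.0, 3.0, 3.0, 1.0)   # 2x2 circumscribed box, overlapping a 1x1 patch
print(coincidence(po, veh))  # 0.25
```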
Step 319, determining whether the perception target and the target vehicle correspond based on the degree of coincidence.
In one embodiment, step 319 may specifically include:
if the coincidence degree is larger than a first preset threshold value, determining that the perception target corresponds to the target vehicle; if the coincidence degree is smaller than or equal to the first preset threshold and larger than a second preset threshold, updating the position information, the length information and the width information of the perception target based on the previous perception data and/or the next perception data in the perception data set to obtain first target perception data; determining the optimized coincidence degree of a first sensing area corresponding to the first target sensing data and an external area of the target vehicle according to the coincidence degree optimized parameter; and if the optimized coincidence degree is greater than the first preset threshold value, determining that the perception target corresponds to the target vehicle.
In one embodiment, determining an optimized overlapping degree between a first sensing area corresponding to first target sensing data and an outer region of the target vehicle according to an overlapping degree optimization parameter includes:
determining a substantial degree of coincidence of the first perception region and the circumscribing region; and determining the optimized coincidence degree according to the coincidence degree optimization parameter and the basic coincidence degree.
Further, determining a substantial degree of coincidence between the first perception region and the circumscribing region includes:
determining the intersection area of the first sensing area and the circumscribed area, and determining the basic coincidence degree according to the intersection area and the sensing area corresponding to the first sensing area or the circumscribed area corresponding to the circumscribed area.
When the perception target is in the far region, a first preset threshold COIN_RATIO = 0.7 may be determined; when the perception target is in the middle region, COIN_RATIO = 0.8; and when the perception target is in the near region, COIN_RATIO = 0.9. The second preset threshold may be 0.6.
Specifically, if povehCoin > COIN_RATIO, it is determined that the perception target corresponds to the target vehicle; at this time, the sensing data corresponding to the perception target may be deleted from the sensing data set, and an element set deRet = (pid, vid, povehCoin) is recorded and added to the tail of the regression compensation queue. If 0.6 < povehCoin ≤ COIN_RATIO, the element set deRet = (pid, vid, povehCoin) is likewise recorded and added to the tail of the regression compensation queue, and it needs to be further determined whether the perception target and the target vehicle are the same target object.
The target vehicle data is accurate because the vehicle is positioned with high accuracy. A coincidence degree below the first preset threshold can come from two causes: first, the detection algorithm is disturbed by the external environment and produces a false detection (for example, sudden lighting changes, or occlusion by objects such as leaves); second, the target perception data and the target vehicle data were matched incorrectly.
For both causes, the position information, length information and width information of the perception target may be updated based on the previous perception data and/or the next perception data in the sensing data set to obtain the first target perception data. For example, the first target perception data may be determined from the position information, length information and width information contained in the previous perception data in the sensing data set; after the first sensing region is determined from this first target perception data, the intersection area of the first sensing region and the circumscribed region is determined, and the basic coincidence degree, i.e., the basic coincidence rate poCoin_pre, is determined from the intersection area and the sensing area corresponding to the first sensing region or the circumscribed area corresponding to the circumscribed region. Similarly, the first target perception data may be determined from the position information, length information and width information contained in the next perception data in the sensing data set, and the basic coincidence rate poCoin_next determined in the same way.
Further, the basic coincidence rates poCoin_pre and poCoin_next can be optimized and compensated according to the coincidence degree optimization parameter μ determined in step 315: μ = 1.1 when the perception target is in the far region, μ = 1.05 in the middle region, and μ = 1.02 in the near region. Thus, the optimized coincidence degree can be determined as poCoin_pre × μ from the coincidence degree optimization parameter and the basic coincidence degree poCoin_pre, or as poCoin_next × μ from the parameter and poCoin_next.
If poCoin_pre × μ > COIN_RATIO or poCoin_next × μ > COIN_RATIO, poCoin is set to the larger of poCoin_pre × μ and poCoin_next × μ, so that poCoin > COIN_RATIO, i.e., the perception target is determined to correspond to the target vehicle; otherwise, poCoin is set to 0, so that poCoin < 0.6, i.e., the perception target is determined not to correspond to the target vehicle.
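The full decision rule of step 319 can be sketched in one function; the per-region thresholds and μ values are those stated above, while the function and dictionary names are assumptions.

```python
COIN_RATIO = {"RR": 0.7, "MR": 0.8, "NR": 0.9}  # first preset thresholds
MU = {"RR": 1.10, "MR": 1.05, "NR": 1.02}       # coincidence optimization params
SECOND_THRESHOLD = 0.6

def matches_vehicle(poveh_coin, region, po_coin_pre=0.0, po_coin_next=0.0):
    """Step 319 decision: direct match above the region's first threshold;
    in the (0.6, threshold] band, retry with neighbor-frame compensation."""
    ratio = COIN_RATIO[region]
    if poveh_coin > ratio:
        return True
    if poveh_coin > SECOND_THRESHOLD:
        # compensate the better of the previous/next-frame base rates by mu
        compensated = max(po_coin_pre, po_coin_next) * MU[region]
        return compensated > ratio
    return False

print(matches_vehicle(0.75, "RR"))                    # True: 0.75 > 0.7
print(matches_vehicle(0.65, "MR", po_coin_pre=0.78))  # True: 0.78 * 1.05 > 0.8
print(matches_vehicle(0.5, "NR"))                     # False: below 0.6
```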
In the embodiment of the invention, whether the perception target corresponds to the target vehicle can be determined according to the overlapping degree, and the perception target and the target vehicle can be determined to be the same target when the perception target corresponds to the target vehicle, so that the perception data corresponding to the perception target can be deleted in the perception data set, and the duplication elimination of the perception data is realized.
The data processing method provided by the embodiment of the invention comprises the following steps: acquiring a vehicle data set corresponding to a target vehicle from the target vehicle, and acquiring a sensing data set from a sensing device; determining the interpolation time of the vehicle data set according to a first time interval corresponding to the vehicle data set and a second time interval corresponding to the sensing data set, and determining the interpolation data corresponding to the interpolation time according to the vehicle data contained in the vehicle data set; obtaining the first vehicle data set corresponding to the target vehicle according to the interpolation time, the corresponding interpolation data and the vehicle data set; matching the first vehicle data set with the sensing data set, and determining target perception data and target vehicle data according to the matching result; determining the position information, length information and width information of each perception target based on the target perception data, and determining the distance information between each perception target and the sensing device based on the position information of each perception target; determining the position information optimization parameter and the coincidence degree optimization parameter of each perception target according to the distance information; optimizing the position information of each perception target based on its position information optimization parameter to obtain optimized position information, and determining the sensing region of each perception target according to its optimized position information, length information and width information; determining the circumscribed region of the target vehicle according to the target vehicle data; determining the coincidence degree when it is determined that the circumscribed region of any target vehicle intersects the sensing region of any perception target; and determining whether the perception target and the target vehicle correspond based on the coincidence degree.
According to this technical scheme, a vehicle data set is obtained from the target vehicle and a sensing data set from the sensing device. The interpolation time of the vehicle data set is determined according to the first time interval of the vehicle data set and the second time interval of the sensing data set, the interpolation data corresponding to the interpolation time are determined according to the vehicle data contained in the vehicle data set, and the vehicle data set is interpolated based on this time and data, so that the time interval of the resulting first vehicle data set is consistent with that of the sensing data set. After the first vehicle data set and the sensing data set are aligned and matched, the corresponding target vehicle data and target perception data can be obtained, realizing the matching of vehicle data and perception data, and the position information of each perception target can then be determined according to the target perception data. Because the sensing device has an overhead viewing angle, the perception targets in the sensing region all suffer from occlusion, so their position information needs to be optimized; and because the occlusion conditions differ between different parts of the sensing region, whether each perception target lies in the far, middle or near region of the sensing region can be determined according to its distance from the sensing device, so that the position information optimization parameter corresponding to that region is determined as the position information optimization parameter of the perception target. The position information of each perception target is then optimized based on this parameter to obtain the optimized position information, realizing the optimization of the position information of each perception target; the resulting optimized position information is more accurate and closer to the actual position. The sensing region of each perception target can accordingly be determined according to its optimized position information, length information and width information, making the determined sensing regions more accurate. After the circumscribed region of the target vehicle is determined according to the target vehicle data, whether each sensing region intersects each circumscribed region can be determined, the coincidence degree is determined when an intersection exists, and whether the perception target corresponds to the target vehicle is determined according to the coincidence degree. When the perception target corresponds to the target vehicle, the perception data corresponding to the perception target are deleted from the sensing data set, achieving coincidence-based data deduplication of the perception data, improving the accuracy of deduplication, and further reducing the redundancy of the real-time traffic data obtained by fusing the vehicle data and the perception data.
Fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention, where the apparatus is suitable for a situation where data deduplication needs to be improved. The apparatus may be implemented by software and/or hardware and is typically integrated in a server.
As shown in fig. 8, the apparatus includes:
a distance determining module 810, configured to determine, according to a sensing data set obtained from a sensing device, position information, length information, and width information of each sensing target, and determine, based on the position information of each sensing target, distance information between each sensing target and the sensing device;
a first optimization parameter determining module 820, configured to determine, according to the distance information, a location information optimization parameter of each sensing target;
a sensing region determining module 830, configured to optimize the position information of each sensing target based on the position information optimization parameter of each sensing target to obtain optimized position information, and determine a sensing region of each sensing target according to the optimized position information, the length information, and the width information of each sensing target;
a circumscribed area determination module 840 for determining a circumscribed area of a target vehicle according to a vehicle data set obtained from the target vehicle;
the coincidence degree determining module 850 is used for determining the coincidence degree when determining that the intersection exists between the circumscribed area of any target vehicle and the sensing area of any sensing target;
an execution module 860 for determining whether the perception target and the target vehicle correspond based on the degree of coincidence.
The data processing apparatus provided in this embodiment determines, according to a sensing data set acquired from a sensing device, position information, length information, and width information of each sensing target, and determines, based on the position information of each sensing target, distance information between each sensing target and the sensing device; determining position information optimization parameters of the perception targets according to the distance information; optimizing the position information of each sensing target based on the position information optimization parameters of each sensing target to obtain optimized position information, and determining a sensing area of each sensing target according to the optimized position information, the length information and the width information of each sensing target; determining an external region of a target vehicle according to a vehicle data set acquired from the target vehicle; when the intersection of the external region of any target vehicle and the perception region of any perception target is determined, determining the degree of coincidence; determining whether the perception target and the target vehicle correspond based on the degree of coincidence. 
This technical scheme first determines the position information of each perception target according to the sensing data set acquired from the sensing device. Because the sensing device has an overhead viewing angle, the perception targets in the sensing region suffer from occlusion, so their position information needs to be optimized; and because the occlusion conditions differ between different parts of the sensing region, whether each perception target lies in the far, middle or near region of the sensing region can be determined according to its distance from the sensing device, so that the position information optimization parameter corresponding to that region is determined as the position information optimization parameter of the perception target. The position information of each perception target is then optimized based on this parameter to obtain the optimized position information, realizing the optimization of the position information of each perception target; the resulting optimized position information is more accurate and closer to the actual position. The sensing region of each perception target can then be determined according to its optimized position information, length information and width information, and the determined sensing regions are accordingly more accurate. After the circumscribed region of the target vehicle is determined from the vehicle data set acquired from the target vehicle, it may be determined whether each sensing region intersects each circumscribed region, the coincidence degree is determined when an intersection exists, and whether the perception target corresponds to the target vehicle is determined according to the coincidence degree. When the perception target corresponds to the target vehicle, the perception data corresponding to the perception target are deleted from the sensing data set, achieving coincidence-based data deduplication of the perception data, improving the accuracy of deduplication, and further reducing the redundancy of the real-time traffic data obtained by fusing the vehicle data and the perception data.
On the basis of the foregoing embodiment, the first optimization parameter determining module 820 is specifically configured to:
and determining the region information to which the perception target belongs according to the distance information, and determining the position information optimization parameter of the perception target according to the region information.
On the basis of the foregoing embodiment, the coincidence determining module 850 is specifically configured to:
when the intersection of the external region of any target vehicle and the sensing region of any sensing target is determined, determining the intersection area; and determining the coincidence degree according to the intersection area and the sensing area corresponding to the sensing area or the circumscribed area corresponding to the circumscribed area.
On the basis of the above embodiment, the apparatus further includes:
and the second optimization parameter determination module is used for determining the optimization parameters of the coincidence degree of each perception target according to the distance information.
On the basis of the foregoing embodiment, the execution module 860 is specifically configured to:
if the coincidence degree is larger than a first preset threshold value, determining that the perception target corresponds to the target vehicle; if the coincidence degree is smaller than or equal to the first preset threshold and larger than a second preset threshold, updating the position information, the length information and the width information of the perception target based on the previous perception data and/or the next perception data in the perception data set to obtain first target perception data; determining the optimized coincidence degree of a first sensing area corresponding to the first target sensing data and an external area of the target vehicle according to the coincidence degree optimized parameter; and if the optimized coincidence degree is greater than the first preset threshold value, determining that the perception target corresponds to the target vehicle.
In one embodiment, determining an optimized overlapping degree between a first sensing area corresponding to first target sensing data and an outer region of the target vehicle according to an overlapping degree optimization parameter includes:
determining a substantial degree of coincidence of the first perception area and the circumscribing area; and determining the optimized coincidence degree according to the coincidence degree optimization parameter and the basic coincidence degree.
Further, determining the basic coincidence degree between the first perception region and the circumscribed region includes:
determining the intersection area of the first perception region and the circumscribed region, and determining the basic coincidence degree according to the intersection area and either the perception area corresponding to the first perception region or the circumscribed area corresponding to the circumscribed region.
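One plausible reading of the optimization step (an assumption on our part; the disclosure gives no concrete formula) is that the basic coincidence degree is scaled by the distance-dependent optimization parameter, clamped to 1.0:

```python
# Hypothetical combination rule: scale the basic coincidence degree by the
# coincidence degree optimization parameter, capped at 1.0. The multiplicative
# form is an illustrative assumption.

def optimized_coincidence_degree(basic_coincidence, coincidence_opt_param):
    return min(1.0, basic_coincidence * coincidence_opt_param)
```

Under this reading, distant targets (whose measurements are less precise) receive a larger parameter, boosting borderline matches past the first threshold.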
The data processing apparatus provided by the embodiments of the present invention can execute the data processing method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing the method.
It should be noted that, in the embodiment of the data processing apparatus, the included units and modules are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
Fig. 9 is a schematic structural diagram of a server according to an embodiment of the present invention, showing a block diagram of an exemplary server 9 suitable for implementing embodiments of the present invention. The server 9 shown in Fig. 9 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in fig. 9, the server 9 is represented in the form of a general-purpose computing server. The components of the server 9 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
The server 9 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by server 9 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The server 9 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 9, and commonly referred to as a "hard drive"). Although not shown in FIG. 9, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The server 9 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the server 9, and/or with any devices (e.g., network card, modem, etc.) that enable the server 9 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the server 9 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 20. As shown in fig. 9, the network adapter 20 communicates with the other modules of the server 9 via the bus 18. It should be appreciated that although not shown in FIG. 9, other hardware and/or software modules may be used in conjunction with the server 9, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and page displays by running programs stored in the system memory 28, for example implementing the data processing method provided by the embodiments of the present invention. The method includes:
determining position information, length information and width information of each perception target according to a perception data set acquired from a perception device, and determining distance information between each perception target and the perception device based on the position information of each perception target;
determining a position information optimization parameter of each perception target according to the distance information;
optimizing the position information of each perception target based on the position information optimization parameter of that perception target to obtain optimized position information, and determining a perception region of each perception target according to the optimized position information, the length information and the width information of that perception target;
determining a circumscribed region of a target vehicle according to a vehicle data set acquired from the target vehicle;
when it is determined that the circumscribed region of any target vehicle intersects the perception region of any perception target, determining a coincidence degree; and
determining whether the perception target corresponds to the target vehicle based on the coincidence degree.
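The first steps of the method above can be sketched end to end as follows. All helper names, the banded distance-to-parameter rule, and the centred-rectangle model are illustrative assumptions; the disclosure fixes none of these formulas:

```python
# Illustrative sketch of the early method steps: distance to the perception
# device, a distance-banded position optimization parameter, and the
# perception region built from (optimized) position, length and width.
import math

def distance_to_device(pos, device_pos):
    # Euclidean distance between a perception target and the perception device.
    return math.hypot(pos[0] - device_pos[0], pos[1] - device_pos[1])

def position_opt_param(dist, band=50.0, step=0.1):
    # Hypothetical rule: each 50 m distance band adds a 0.1 correction weight.
    return 1.0 + step * int(dist // band)

def perception_region(pos, length, width):
    # Axis-aligned rectangle centred on the (optimized) position.
    return (pos[0] - length / 2, pos[1] - width / 2,
            pos[0] + length / 2, pos[1] + width / 2)
```

The resulting rectangles would then be intersected with the target vehicle's circumscribed region to obtain the coincidence degree described earlier.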
Of course, those skilled in the art can understand that the processor can also implement the technical solution of the data processing method provided by any embodiment of the present invention.
An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the data processing method provided by the embodiments of the present invention, which includes:
determining position information, length information and width information of each perception target according to a perception data set acquired from a perception device, and determining distance information between each perception target and the perception device based on the position information of each perception target;
determining a position information optimization parameter of each perception target according to the distance information;
optimizing the position information of each perception target based on the position information optimization parameter of that perception target to obtain optimized position information, and determining a perception region of each perception target according to the optimized position information, the length information and the width information of that perception target;
determining a circumscribed region of a target vehicle according to a vehicle data set acquired from the target vehicle;
when it is determined that the circumscribed region of any target vehicle intersects the perception region of any perception target, determining a coincidence degree; and
determining whether the perception target corresponds to the target vehicle based on the coincidence degree.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It will be understood by those skilled in the art that the modules or steps of the present invention described above can be implemented by a general purpose computing device, they can be centralized in a single computing device or distributed over a network of multiple computing devices, and they can alternatively be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device, or they can be separately fabricated into various integrated circuit modules, or multiple modules or steps thereof can be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
In addition, the technical scheme of the invention conforms to the relevant regulations of national laws and regulations in terms of data acquisition, storage, use, processing and the like.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments illustrated herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A data processing method, comprising:
determining position information, length information and width information of each perception target according to a perception data set acquired from a perception device, and determining distance information between each perception target and the perception device based on the position information of each perception target;
determining a position information optimization parameter of each perception target according to the distance information;
optimizing the position information of each perception target based on the position information optimization parameter of that perception target to obtain optimized position information, and determining a perception region of each perception target according to the optimized position information, the length information and the width information of that perception target;
determining a circumscribed region of a target vehicle according to a vehicle data set acquired from the target vehicle;
when it is determined that the circumscribed region of any target vehicle intersects the perception region of any perception target, determining a coincidence degree; and
determining whether the perception target corresponds to the target vehicle based on the coincidence degree.
2. The data processing method of claim 1, wherein determining the position information optimization parameter of each perception target according to the distance information comprises:
determining region information to which the perception target belongs according to the distance information, and determining the position information optimization parameter of the perception target according to the region information.
3. The data processing method of claim 1, wherein, when it is determined that the circumscribed region of any one of the target vehicles intersects the perception region of any one of the perception targets, determining the coincidence degree comprises:
when it is determined that the circumscribed region of any target vehicle intersects the perception region of any perception target, determining the intersection area; and
determining the coincidence degree according to the intersection area and either the perception area corresponding to the perception region or the circumscribed area corresponding to the circumscribed region.
4. The data processing method of claim 1, further comprising:
determining a coincidence degree optimization parameter of each perception target according to the distance information.
5. The data processing method of claim 4, wherein determining whether the perception target corresponds to the target vehicle based on the coincidence degree comprises:
if the coincidence degree is greater than a first preset threshold, determining that the perception target corresponds to the target vehicle;
if the coincidence degree is less than or equal to the first preset threshold and greater than a second preset threshold, updating the position information, the length information and the width information of the perception target based on the previous perception data and/or the next perception data in the perception data set to obtain first target perception data;
determining an optimized coincidence degree between a first perception region corresponding to the first target perception data and the circumscribed region of the target vehicle according to the coincidence degree optimization parameter; and
if the optimized coincidence degree is greater than the first preset threshold, determining that the perception target corresponds to the target vehicle.
6. The data processing method of claim 5, wherein determining the optimized coincidence degree between the first perception region corresponding to the first target perception data and the circumscribed region of the target vehicle according to the coincidence degree optimization parameter comprises:
determining a basic coincidence degree between the first perception region and the circumscribed region; and
determining the optimized coincidence degree according to the coincidence degree optimization parameter and the basic coincidence degree.
7. The data processing method of claim 6, wherein determining the basic coincidence degree between the first perception region and the circumscribed region comprises:
determining the intersection area of the first perception region and the circumscribed region, and determining the basic coincidence degree according to the intersection area and either the perception area corresponding to the first perception region or the circumscribed area corresponding to the circumscribed region.
8. A data processing apparatus, characterized by comprising:
a distance determining module, configured to determine position information, length information and width information of each perception target according to a perception data set acquired from a perception device, and determine distance information between each perception target and the perception device based on the position information of each perception target;
a first optimization parameter determining module, configured to determine a position information optimization parameter of each perception target according to the distance information;
a perception region determining module, configured to optimize the position information of each perception target based on the position information optimization parameter of that perception target to obtain optimized position information, and determine a perception region of each perception target according to the optimized position information, the length information and the width information of that perception target;
a circumscribed region determining module, configured to determine a circumscribed region of a target vehicle according to a vehicle data set acquired from the target vehicle;
a coincidence degree determining module, configured to determine the coincidence degree when it is determined that the circumscribed region of any target vehicle intersects the perception region of any perception target; and
an execution module, configured to determine whether the perception target corresponds to the target vehicle based on the coincidence degree.
9. A server, characterized in that the server comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the data processing method of any one of claims 1-7.
10. A storage medium containing computer-executable instructions for performing the data processing method of any one of claims 1-7 when executed by a computer processor.
CN202210835622.0A 2022-07-15 2022-07-15 Data processing method, device, server and storage medium Pending CN115168330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210835622.0A CN115168330A (en) 2022-07-15 2022-07-15 Data processing method, device, server and storage medium

Publications (1)

Publication Number Publication Date
CN115168330A true CN115168330A (en) 2022-10-11

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117934805A (en) * 2024-03-25 2024-04-26 腾讯科技(深圳)有限公司 Object screening method and device, storage medium and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination