CN112687107B - Perception data acquisition method and device


Info

Publication number: CN112687107B
Authority: CN (China)
Application number: CN202110273240.9A
Other versions: CN112687107A (in Chinese)
Prior art keywords: sensing, perception, data, target, unit
Inventors: 乔倚松, 王劲
Current assignee: Tianyi Transportation Technology Co., Ltd.
Original assignee: CIIC Technology Co., Ltd.
Application filed by CIIC Technology Co., Ltd.; application granted; legal status: Active

Classifications

  • Traffic Control Systems (AREA)

Abstract

The invention provides a perception data acquisition method and device. The method acquires the relative position information and perception range information of a plurality of perception units to determine the perception overlap information of each perception unit with the other perception units; acquires, from the cache unit corresponding to each perception unit, the total perception data of that unit within a perception period; matches the total perception data in each cache unit according to the perception overlap information to obtain the sub-perception data corresponding to each perception object and generates a perception data set for each perception object; and finally processes the sub-perception data in the target perception data set to obtain the target perception data of the target perception object. The invention realizes fused perception between different vehicles, between vehicles and roadside sensing units, and between different roadside sensing units; it overcomes the perception blind areas to which single-vehicle perception is prone, enlarges the perception range, overcomes the delay problem in distributed-system data transmission, and improves the quality of the perception data.

Description

Perception data acquisition method and device
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a perception data acquisition method and device.
Background
At present, in the field of automatic driving, the environment (obstacles and the like) is mainly perceived by a single vehicle to obtain environment perception data. Single-vehicle perception, however, suffers from perception blind areas, so the perception data is not comprehensive enough; in addition, when the perception data of each vehicle is collected, distributed-system transmission delay arises, so the obtained perception data is not ideal.
Therefore, the existing perception data acquisition method suffers from the technical problem of poor perception data quality and needs to be improved.
Disclosure of Invention
The invention provides a perception data acquisition method and device to alleviate the technical problem of poor perception data quality in existing perception data acquisition methods.
In order to solve the technical problems, the invention provides the following technical scheme:
the invention provides a perception data acquisition method, which comprises the following steps:
acquiring relative position information and sensing range information of a plurality of sensing units in a target area, wherein the sensing units comprise at least one of vehicle-mounted sensing units and roadside sensing units;
determining perception overlapping information of each perception unit and other perception units according to the relative position information and the perception range information;
acquiring, from the cache unit corresponding to each sensing unit, the total sensing data of all sensing objects within the sensing range of that sensing unit during a sensing period;
according to the perception overlapping information, matching total perception data in each cache unit to obtain sub perception data corresponding to each perception object, and respectively generating a perception data set corresponding to each perception object according to the sub perception data;
and when the number of the sub-perception data in the target perception data set is larger than a preset value, processing the sub-perception data in the target perception data set to obtain the target perception data of the target perception object.
The invention also provides a sensing data acquisition device, which comprises:
the first acquisition module is used for acquiring relative position information and sensing range information of a plurality of sensing units in a target area, wherein the sensing units comprise at least one of vehicle-mounted sensing units and roadside sensing units;
the determining module is used for determining perception overlapping information of each perception unit and other perception units according to the relative position information and the perception range information;
the second acquisition module is used for acquiring, from the cache unit corresponding to each sensing unit, the total sensing data of all sensing objects within the sensing range of that sensing unit during a sensing period;
the matching module is used for matching the total sensing data in each cache unit according to the sensing overlapping information to obtain sub-sensing data corresponding to each sensing object, and respectively generating a sensing data set corresponding to each sensing object according to the sub-sensing data;
and the obtaining module is used for processing each sub-perception data in the target perception data set to obtain the target perception data of the target perception object when the number of the sub-perception data in the target perception data set is larger than a preset value.
The invention also provides an electronic device comprising a memory and a processor; the memory stores an application program, and the processor is configured to execute the application program in the memory to perform any one of the operations in the perceptual data acquisition method.
The present invention also provides a computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the perceptual data acquisition method as set forth in any one of the above.
Advantageous effects: the invention provides a perception data acquisition method and device. The method first acquires the relative position information and perception range information of a plurality of perception units in a target area, the perception units comprising at least one of vehicle-mounted perception units and roadside perception units; it then determines the perception overlap information of each perception unit with the other perception units according to the relative position information and the perception range information; it then acquires, from the cache unit corresponding to each perception unit, the total perception data of all perception objects within that unit's perception range during a perception period; it matches the total perception data in each cache unit according to the perception overlap information to obtain the sub-perception data corresponding to each perception object, and generates a perception data set for each perception object from the sub-perception data; finally, when the number of sub-perception data in the target perception data set is greater than a preset value, it processes the sub-perception data in the target perception data set to obtain the target perception data of the target perception object. Because the total perception data of the perception units is matched according to the perception overlap information of the plurality of perception units, perception data perceived by multiple units of different types and at different installation positions can be obtained for every perception object located in a perception overlap area; and because the perception units comprise at least one of vehicle-mounted and roadside perception units, fused perception between different vehicles, between vehicles and roadside perception units, and between different roadside perception units can be realized, overcoming the perception blind areas to which single-vehicle perception is prone and enlarging the perception range. In addition, each perception unit is provided with an independent cache unit that stores its total perception data, so the total perception data can be stored in real time, overcoming the delay problem in distributed-system data transmission; the interaction of the perception units with their cache units thus improves the quality of the perception data.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
Fig. 1 is a schematic view of a scene to which the perceptual data acquisition method provided by the present invention is applicable.
Fig. 2 is a schematic flow chart of a sensing data acquisition method provided by the present invention.
FIG. 3 is a schematic diagram of the distribution of sensing units in the target region according to the present invention.
Fig. 4 is a schematic node diagram of the distribution positions of the sensing units in fig. 3.
Fig. 5 is a schematic diagram of the total sensing data matching of each first-type sensing unit in the present invention.
Fig. 6 is a schematic structural diagram of a sensing data acquisition device provided in the present invention.
Fig. 7 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
The technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a sensing data acquisition method and a sensing data acquisition device, which are used for relieving the technical problem of poor quality of sensing data in the conventional sensing data acquisition method.
Referring to fig. 1, fig. 1 is a schematic view of a scenario to which the perception data acquisition method provided by the invention is applicable. The scenario may include terminals and servers, which are connected to and communicate with one another through the internet formed by various gateways. The application scenario includes a sensing unit 11 and a server 12, wherein:
the sensing unit 11 comprises at least one of a vehicle-mounted sensing unit and a roadside sensing unit. The vehicle-mounted sensing unit is a vehicle-mounted sensor installed on an autonomous vehicle or a manned vehicle; the roadside sensing unit is a roadside sensor arranged on the two sides of a lane for vehicle-road cooperation. The vehicle-mounted sensor and the roadside sensor may each comprise a camera, a GPS antenna, a lidar, a millimeter-wave radar and the like, enabling accurate collection of data about the lanes, vehicles, obstacles and so on along the road. Each sensing unit 11 is provided with an independent cache unit, and the data collected by the sensing unit 11 is cached in its cache unit in real time;
the server 12 includes a local server and/or a remote server, etc.
The sensing unit 11 and the server 12 are located in a wireless network or a wired network to realize data interaction between the two, wherein:
the server 12 first obtains relative position information and sensing range information of a plurality of sensing units 11 in a target area, then determines sensing overlapping information of each sensing unit 11 and other sensing units 11 according to the relative position information and the sensing range information, then obtains total sensing data of all sensing objects in the sensing range of the sensing unit 11 in a sensing period from a cache unit corresponding to each sensing unit 11, matches the total sensing data in each cache unit according to the sensing overlapping information to obtain sub-sensing data corresponding to each sensing object, respectively generates a sensing data set corresponding to each sensing object according to the sub-sensing data, and finally processes the sub-sensing data in the target sensing data set when the number of the sub-sensing data in the target sensing data set is greater than a preset value to obtain target sensing data of the target sensing object.
It should be noted that the system scenario diagram shown in fig. 1 is only an example, the server and the scenario described in the present invention are for more clearly illustrating the technical solution of the present invention, and do not constitute a limitation to the technical solution provided by the present invention, and it is known to those skilled in the art that as the system evolves and a new service scenario appears, the technical solution provided by the present invention is also applicable to similar technical problems. The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of a sensing data obtaining method provided by the present invention, the method including:
s201: the method comprises the steps of obtaining relative position information and sensing range information of a plurality of sensing units in a target area, wherein the sensing units comprise at least one of vehicle-mounted sensing units and roadside sensing units.
The target area is an area in which sensing data needs to be acquired; it includes at least one lane and a plurality of sensing units, each comprising at least one of a vehicle-mounted sensing unit and a roadside sensing unit. That is, the target area may contain only vehicle-mounted sensing units, only roadside sensing units, or both. The vehicle-mounted sensing unit is a vehicle-mounted sensor mounted on an autonomous vehicle or a manned vehicle; the roadside sensing unit is a roadside sensor arranged on one or both sides of a lane for vehicle-road cooperation. The vehicle-mounted sensor and the roadside sensor may comprise a camera, a lidar, a millimeter-wave radar and the like, enabling accurate collection of environment data about the lanes, vehicles, obstacles and so on along the road.
When the sensing data of the target area is acquired, the relative position information and the sensing range information of a plurality of sensing units are acquired.
There are various ways to obtain the relative position information of the sensing units within the target area. The roadside sensing units are generally fixed in position, so the position of each roadside sensing unit can be read directly from a pre-built map, from which the relative position information of different roadside sensing units is determined. The position of a vehicle-mounted sensing unit changes in real time as the vehicle travels; the real-time position of each vehicle can be obtained by positioning with an on-board GPS or lidar, and the relative position information of the vehicle-mounted sensing units is then calculated. From the known positions of the roadside and vehicle-mounted sensing units, the relative position information between vehicle-mounted and roadside sensing units can likewise be calculated. It should be noted that these are only some conventional ways of acquiring the relative position information; the invention is not limited thereto, and the relative position information of the sensing units may also be acquired in other ways, such as direct measurement or estimation. Any technical means capable of acquiring the relative position information falls within the scope of the invention.
Each perception unit has a corresponding perception range, and can perceive each perception object in the perception range so as to acquire perception data. All targets appearing in the sensing area of each sensing unit can be used as sensing objects, including but not limited to obstacles, traffic indication marks, vehicles, vehicle-mounted sensing units, roadside sensing units and the like. The corresponding sensing range is different according to the type of the sensing unit, for example, the sensing range of the camera may be a fan-shaped area, the sensing range of the radar may be a fan-shaped or circular area, and the like. The sensing units of the same type also have different sensing ranges due to different manufacturing parameters, different installation parameters and the like. Therefore, when obtaining the sensing range information of each sensing unit, multiple factors need to be considered comprehensively to obtain a more accurate sensing range.
S202: and determining perception overlapping information of each perception unit and other perception units according to the relative position information and the perception range information.
After the relative position information of the sensing units is obtained, the distance between any two sensing units is known. Comparing this distance with the sensing range of each sensing unit determines the sensing overlap information of each sensing unit with the other sensing units, the sensing overlap information including whether the sensing ranges of the sensing units overlap. After the sensing overlap information is determined, an adjacency matrix of the sensing units can be constructed to show the relationships between the sensing units more clearly. Each vertex in the adjacency matrix represents one sensing unit, and the entry for a pair of vertices represents the connectivity between the two corresponding sensing units, given by the distance between them; if the sensing ranges of the two sensing units do not overlap, the connectivity is set to $\infty$.
As shown in fig. 3, the target area includes a target lane 100, and further includes a first sensing unit 101, a second sensing unit 102, a third sensing unit 103, and a fourth sensing unit 104, where the first sensing unit 101, the second sensing unit 102, and the fourth sensing unit 104 are roadside sensing units disposed at the side of the target lane 100, and the third sensing unit 103 is a vehicle-mounted sensing unit in the target lane 100. In addition, a first perception object 21, a second perception object 22, a third perception object 23 and a fourth perception object 24 are further arranged in the target lane 100, wherein the first perception object 21 and the second perception object 22 are pedestrians, the third perception object 23 is a road cone, the fourth perception object 24 is a vehicle, and the third perception unit 103 is arranged on the fourth perception object 24.
After the relative position information and the sensing range information of each sensing unit are obtained in the above manner, the distribution positions of the sensing units in fig. 3 can be drawn as nodes as shown in fig. 4, where the four vertices R1, R2, R3 and R4 represent the first sensing unit 101, the second sensing unit 102, the third sensing unit 103 and the fourth sensing unit 104, respectively. An adjacency matrix $M$ of the sensing units is then constructed, with indices 1 to 4 corresponding to R1 to R4:

$$M=\begin{pmatrix}0 & 100 & 60 & \infty\\ 100 & 0 & 50 & \infty\\ 60 & 50 & 0 & \infty\\ \infty & \infty & \infty & 0\end{pmatrix}$$

Here $m_{12}=100$ indicates that the sensing ranges of R1 and R2 overlap and that the distance between R1 and R2 is 100 m; $m_{13}=60$ indicates that the sensing ranges of R1 and R3 overlap with a distance of 60 m; $m_{23}=50$ indicates that the sensing ranges of R2 and R3 overlap with a distance of 50 m; and $m_{14}=m_{24}=m_{34}=\infty$ indicates that the sensing range of R4 overlaps with none of the others. By constructing the adjacency matrix, the relationships between the sensing units can be shown more clearly.
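To make the construction concrete, here is a minimal Python sketch that builds such an adjacency matrix from unit positions and sensing ranges. It assumes circular sensing ranges in 2-D coordinates; the positions and radii at the bottom are illustrative values chosen to reproduce the distances in the matrix above, not data from the patent.

```python
import math

INF = math.inf  # marks "sensing ranges do not overlap"

def build_adjacency(positions, ranges):
    # positions: list of (x, y) unit coordinates (assumed 2-D)
    # ranges: sensing radius of each unit (circular ranges assumed)
    # Returns M with M[i][j] = distance between units i and j if their
    # sensing ranges overlap, INF otherwise; the diagonal stays 0.
    n = len(positions)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(positions[i], positions[j])
            # Two circular ranges overlap when the distance between the
            # units is smaller than the sum of their sensing radii.
            M[i][j] = M[j][i] = d if d < ranges[i] + ranges[j] else INF
    return M

# Illustrative layout reproducing the matrix for R1..R4 above:
positions = [(0.0, 0.0), (100.0, 0.0), (55.5, 22.8), (500.0, 0.0)]
ranges = [60.0, 60.0, 60.0, 60.0]
M = build_adjacency(positions, ranges)  # M[0][1] ~ 100, M[0][3] == INF
```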
S203: and acquiring the total sensing data of all sensing objects in the sensing range by the sensing units in the sensing period from the cache units corresponding to each sensing unit.
In the invention, each sensing unit is provided with an independent cache unit: the total sensing data of all sensing objects within each unit's sensing range is stored in its corresponding cache unit in real time, and the total sensing data of the current sensing period is then taken out of each cache unit at a preset frequency. In some current distributed systems, the data acquired by every node must be concentrated in the cloud before being processed centrally; because the distances between nodes differ, the time required for data transmission differs as well, data arrives at different times, and the transmission delay of some nodes affects the subsequent computation. The invention instead sets a separate cache unit for each sensing unit, which stores the sensing data in real time without long-distance transmission, thereby overcoming the delay problem of the distributed system and ensuring the timeliness, integrity and accuracy of the data.
In one embodiment, S203 specifically includes: acquiring a target sensing period of a target sensing unit; and acquiring the total sensing data of all sensing units within the target sensing period at a preset frequency. When the total sensing data is obtained from the cache units, a target sensing unit is determined first; it can be any sensing unit in the target area, selected according to actual needs. A sensing period T is set for each sensing unit, and within each period T the unit must complete a preset sensing operation on its surroundings to acquire valid sensing data; for example, a camera must take n photographs of the environment within its sensing range in one sensing period T to acquire relatively complete sensing data. After the target sensing unit is selected, all total sensing data of the current target sensing period is acquired from the cache unit of the target sensing unit; taking that period as the reference, all total sensing data of the same time period is acquired from the cache units of the other sensing units, and the total sensing data of all sensing units in each successive target sensing period is acquired at the preset frequency. Obtaining the total sensing data of all sensing units over the same time period makes the sensing data of the sensing objects within the target sensing unit's range more complete and ensures the effective fusion of the subsequent sensing data.
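The per-unit cache and the period-aligned retrieval of S203 can be sketched as follows; this is a hedged illustration in which CacheUnit, store, fetch and collect_period are assumed names, and real cache units would hold sensor frames rather than opaque Python objects.

```python
from collections import deque

class CacheUnit:
    """Independent buffer attached to one sensing unit; data is stored
    in real time as it is perceived, without long-distance transfer."""
    def __init__(self, maxlen=1000):
        self.buf = deque(maxlen=maxlen)

    def store(self, timestamp, total_sensing_data):
        self.buf.append((timestamp, total_sensing_data))

    def fetch(self, t_start, t_end):
        # All entries whose timestamps fall inside the queried period.
        return [d for t, d in self.buf if t_start <= t < t_end]

def collect_period(caches, period_start, period_T):
    # Fetch every unit's total sensing data for the same time window,
    # aligned to the target unit's sensing period
    # [period_start, period_start + period_T).
    t_end = period_start + period_T
    return {uid: cache.fetch(period_start, t_end)
            for uid, cache in caches.items()}
```

Calling collect_period once per period at the preset frequency yields the time-aligned total sensing data of all units.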
S204: and matching the total perception data in each cache unit according to the perception overlapping information to obtain sub-perception data corresponding to each perception object, and respectively generating a perception data set corresponding to each perception object according to the sub-perception data.
As shown in fig. 3, assume the sensing objects within the range of the first sensing unit 101 are the first sensing object 21 and the fourth sensing object 24; those within the range of the second sensing unit 102 are the first sensing object 21, the fourth sensing object 24 and the third sensing object 23; those within the range of the third sensing unit 103 are the first sensing object 21 and the third sensing object 23; and the only sensing object within the range of the fourth sensing unit 104 is the second sensing object 22. The total sensing data in the cache unit of the first sensing unit 101 then includes two pieces of sub-sensing data, one for the first sensing object 21 and one for the fourth sensing object 24; the total sensing data in the cache unit of the second sensing unit 102 includes three pieces of sub-sensing data, for the first sensing object 21, the fourth sensing object 24 and the third sensing object 23; and so on for the other sensing units. The sub-sensing data of each sensing object may include the position, size, category, orientation, speed, acceleration and other data of the object.
The sensing overlap information obtained above determines whether the sensing ranges of the sensing units overlap. Where the ranges overlap, at least two sensing units perceive the sensing objects in the overlap area at the same time, each obtaining its own sub-sensing data for them; a sensing object outside any overlap area is perceived by only one sensing unit at a time, yielding only one piece of sensing data. Because each cache unit stores the total sensing data of the objects within its own unit's range, the total sensing data of the sensing units must first be matched when the sensing data of each object in the target area is to be acquired, so that the sub-sensing data produced by different units for the same object at the same time can be gathered into a corresponding sensing data set.
In one embodiment, S204 specifically includes: determining, from all sensing units according to the sensing overlap information, the first-class sensing units whose sensing areas overlap those of other sensing units and the second-class sensing units whose sensing areas do not; matching the total sensing data of every two first-class sensing units whose sensing areas overlap, to obtain the first sub-sensing data corresponding to each first-class sensing object; and obtaining the second sub-sensing data corresponding to each second-class sensing object directly from the total sensing data of each second-class sensing unit. For the sensing units distributed as in fig. 3, the sensing overlap information shows that the first sensing unit 101, the second sensing unit 102 and the third sensing unit 103 are first-class sensing units, and the fourth sensing unit 104 is a second-class sensing unit. All objects perceived within the range of a first-class sensing unit are first-class sensing objects, and all objects perceived within the range of a second-class sensing unit are second-class sensing objects.
For the three first-class sensing units there are two sensing overlap regions, so the adjacency matrix $M$ is traversed and the total sensing data of each pair of overlapping first-class sensing units is matched; the total sensing data in the two first-class sensing units can be reclassified and combined according to the different sensing objects, to obtain all sub-sensing data corresponding to each first-class sensing object. During matching, the Hungarian algorithm, the KM matching algorithm or the like can be used to perform target matching on the sensing results from the two different sources, and a sensing-result information array $A_j=\{s_{i,j}\}$ is constructed for each sensing object according to the matching result. The information array is a sensing data set that stores the sensing results of different sensing units for the same sensing object at the same time, where $i$ and $j$ denote the numbers of the sensing units and of the sensing objects, which can be preset.
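As an illustration of this pairwise target matching, the following sketch uses the Hungarian algorithm as implemented by SciPy's scipy.optimize.linear_sum_assignment; the detection format, cost-function interface and gating threshold are illustrative assumptions rather than the patent's specification.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def match_detections(dets_a, dets_b, cost_fn, max_cost=5.0):
    # dets_a, dets_b: sub-sensing data lists of two overlapping units;
    # cost_fn(a, b) scores how unlikely a and b are the same object.
    if not dets_a or not dets_b:
        return []
    cost = np.array([[cost_fn(a, b) for b in dets_b] for a in dets_a])
    rows, cols = linear_sum_assignment(cost)
    # Gate the assignment: a pair whose cost exceeds max_cost is taken
    # to be two different objects, not a poor match of the same one.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_cost]
```

Matched index pairs are merged into one information array per object; an unmatched detection starts, or stays in, its own array.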
Taking the matching of the first sensing unit 101, the second sensing unit 102 and the third sensing unit 103 in fig. 3 as an example, as shown in fig. 5: the first sensing unit 101 has a first cache unit 51, the second sensing unit 102 has a second cache unit 52, and the third sensing unit 103 has a third cache unit 53. The total sensing data in the first cache unit 51 includes first sub-sensing data 511 for the first sensing object and first sub-sensing data 512 for the fourth sensing object; the total sensing data in the second cache unit 52 includes second sub-sensing data 521 for the first sensing object, second sub-sensing data 522 for the fourth sensing object, and first sub-sensing data 523 for the third sensing object; and the total sensing data in the third cache unit 53 includes third sub-sensing data 531 for the first sensing object and second sub-sensing data 532 for the third sensing object. Panels a, b and c of fig. 5 show the results of the pairwise matching of the three sensing units. After matching, a first information array $A_1$ for the first sensing object 21, a second information array $A_2$ for the fourth sensing object 24, and a third information array $A_3$ for the third sensing object 23 can be constructed, where $A_1$ contains 3 pieces of sub-sensing data and $A_2$ and $A_3$ each contain 2 pieces of sub-sensing data.
Through the above steps, the total sensing data of every two first-class sensing units whose sensing areas overlap is matched to obtain the first sub-sensing data corresponding to each first-class sensing object, and thus the sensing data set corresponding to each first-class sensing object.
For the fourth sensing unit 104, since it is a second-class sensing unit, no other sensing unit can simultaneously sense the second-class sensing objects within its sensing range; that is, for each second-class sensing object only one piece of sub-sensing data is generated at a given moment. The sub-sensing data of the second sensing object 22 can therefore be obtained directly from the cache unit of the fourth sensing unit 104, and the information array $A_4$ constructed for the second sensing object 22 contains only 1 piece of sub-sensing data.
It should be noted that the above embodiment was described for the case in which, after the first sensing unit 101, the second sensing unit 102 and the third sensing unit 103 are matched pairwise, all first-class sensing objects fall within the overlap areas. When the position of a first-class sensing object changes (for example, the third sensing object 23 moves some distance toward the upper right along the target lane 100 in fig. 3, so that the second sensing unit 102 can still sense it but the third sensing unit 103 cannot), the sub-sensing data of the third sensing object 23 in the second sensing unit 102 has no sub-sensing data in any other sensing unit to match with during matching; that sub-sensing data is then obtained directly from the second cache unit 52 and a corresponding information array is constructed.
In an embodiment, the step of matching the total sensing data of the first-class sensing units whose sensing areas overlap specifically includes: acquiring the type parameter of each first-class sensing unit; determining, according to the type parameters, the matching similarity parameter for each pair of first-class sensing units whose sensing areas overlap; and matching the total sensing data of every two overlapping first-class sensing units based on the matching similarity parameter. The type parameter identifies the specific kind of sensor, such as a camera, lidar or millimeter-wave radar; the matching similarity parameters include the position, size, category, orientation, speed, acceleration and other information of a sensing object. Because different types of sensing units sense on different principles, the similarity parameters needed for matching differ. For example, a lidar mainly obtains the position of a sensing object by ranging, so when both sensing units are lidars, the position of the sensing object serves as the matching similarity parameter and two pieces of sub-sensing data with the same object position are matched together; for millimeter-wave radars, speed can serve as the matching similarity parameter, and two pieces of sub-sensing data with the same object speed are matched together. The invention thus first obtains the type parameter of each first-class sensing unit, then determines different matching similarity parameters based on the different type parameters, and matches the pairwise-overlapping first-class sensing units using those parameters.
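A minimal sketch of how the matching-similarity parameter could be selected from the type parameters, following the lidar-position and millimeter-wave-radar-speed examples above; the type strings, field names and the mixed-type fallback are assumptions for exposition.

```python
import math

def matching_cost(det_a, det_b, type_a, type_b):
    # Lidar localizes objects by ranging, so lidar pairs compare
    # positions; millimeter-wave radar measures speed well, so radar
    # pairs compare velocities (per the examples in the text).
    if type_a == type_b == "lidar":
        return math.dist(det_a["position"], det_b["position"])
    if type_a == type_b == "mmw_radar":
        return abs(det_a["speed"] - det_b["speed"])
    # Mixed types: combine both cues; this fallback is an assumption,
    # not taken from the original text.
    return (math.dist(det_a["position"], det_b["position"])
            + abs(det_a["speed"] - det_b["speed"]))
```

A function like this can be passed as the cost_fn of the earlier match_detections sketch.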
S205: and when the number of the sub-perception data in the target perception data set is larger than a preset value, processing the sub-perception data in the target perception data set to obtain the target perception data of the target perception object.
Through the above steps, a corresponding perception data set is generated for every perception object in the target area; depending on the matching results, the numbers of sub-perception data in the perception data sets are not all the same. When the target perception data of a target perception object is to be acquired, the number of sub-perception data in the target perception data set is checked against a preset value, here set to 1. A number greater than 1 indicates that at least two sensing units sensed the target object at the same moment and generated corresponding sub-perception data; in that case the sub-perception data in the target perception data set must be processed to determine the required target perception data.
In one embodiment, S205 specifically includes: acquiring the distance between the sensing unit corresponding to each piece of sub-sensing data in the target sensing data set and the target sensing object; determining, according to the distance information, the sensing unit closest to the target sensing object as the target sensing unit; and determining the sub-sensing data of the target sensing unit as the target sensing data of the target sensing object. Different sensing units sensing the same target object each obtain corresponding sub-sensing data, but because their distances to the object differ, the data volume, clarity and so on of the sub-sensing data differ as well; the closer a sensing unit is to the target object, the better the quality of its sub-sensing data. The distance between each sensing unit and the target sensing object is therefore obtained first, and the sub-sensing data of the closest sensing unit is used as the target sensing data.
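The nearest-unit selection reduces to a one-liner; the (distance, data) pair format is an assumed representation.

```python
def select_target_data(candidates):
    # candidates: [(distance_to_target_object, sub_sensing_data), ...],
    # one entry per sensing unit that perceived the target object.
    return min(candidates, key=lambda item: item[0])[1]
```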
In an embodiment, S205 further includes: fusing the pieces of sub-sensing data in the target sensing data set; and generating the target sensing data of the target sensing object from the fusion result. For the fusion, the distance between the sensing unit corresponding to each piece of sub-sensing data and the target sensing object can be obtained first, and the sub-sensing data is then weighted and fused using the reciprocal of the distance as the weight. The target sensing data obtained by fusing the sub-sensing data in the target sensing data set is the result of the joint perception of multiple sensing units, and is therefore more accurate, more comprehensive and of greater reference value than single-unit perception.
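Reading the embodiment as inverse-distance weighting, the fused value of a numeric field is $\hat{x}=\sum_i (x_i/d_i)\,/\,\sum_i (1/d_i)$. The sketch below applies this to one scalar field; the pair format is an assumed representation, and vector fields would be fused component-wise.

```python
def fuse_weighted(candidates):
    # candidates: [(distance, value), ...], where value is one numeric
    # field of the sub-sensing data (e.g. an estimated coordinate).
    weights = [1.0 / d for d, _ in candidates]  # assumes d > 0
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, candidates)) / total
```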
In an embodiment, before the step of fusing the pieces of sub-sensing data in the target sensing data set, the method further includes: acquiring the perception synchronization information of the sensing units corresponding to the target sensing data set; determining a target filtering method according to the perception synchronization information; and filtering the sub-sensing data in the target sensing data set with the target filtering method. Filtering each piece of sub-sensing data in the sensing data set removes interference signals and yields more accurate and effective sensing data.
Within the same sensing period, the sensing operations on the target sensing object may or may not be synchronized. For example, if two cameras both photograph the target sensing object at the 3rd second of the sensing period, they can be considered synchronized; if one camera shoots at the 1st second and the other at the 3rd second, they are not. Different filtering methods are selected according to the perception synchronization information of the sensing units. When synchronized, different filtering methods, such as median filtering or mean filtering, can be chosen according to the type of the target sensing object (pedestrian, road sign, vehicle and so on); when not synchronized, Kalman filtering can smooth the sub-sensing data of different sensing units at different moments, which also resolves the problem of sensing units operating at inconsistent sensing frequencies.
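A sketch of the filter dispatch described above; the mapping from object type to median versus mean filtering is an assumed example, and the asynchronous Kalman-smoothing branch is only stubbed.

```python
import statistics

def filter_samples(samples, synchronized, object_type="vehicle"):
    # samples: one numeric measurement per sensing unit (illustrative).
    if synchronized:
        # Synchronized units: choose the filter by object type; this
        # particular mapping is an assumed example.
        if object_type == "pedestrian":
            return statistics.median(samples)
        return statistics.mean(samples)
    # Asynchronous units: the text smooths differently-timed samples
    # with a Kalman filter; a full filter is out of scope here.
    raise NotImplementedError("Kalman smoothing for asynchronous units")
```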
In one embodiment, after S204 the method further includes: directly determining the sub-sensing data in the target sensing data set as the target sensing data when the number of sub-sensing data in the set is not greater than the preset value. When that number is not greater than the preset value of 1, the target sensing object has only one piece of sub-sensing data, which is taken directly from the target sensing data set as the target sensing data.
In one embodiment, after S205 the method further includes: receiving a sensing data acquisition request; and sending the target sensing data of the target sensing objects corresponding to the request at a preset frequency. The preceding steps yield the final sensing data for all sensing objects processed within the sensing period. When a requester wants to acquire sensing data, it first sends a sensing data acquisition request; the requester can be a vehicle in the target area, or an external terminal or server. The request may carry object identifiers specifying the target sensing objects, which may be one, several or all of the sensing objects. After receiving the request, the server of the system sends the target sensing data of the specified target sensing objects, namely the final sensing data obtained after processing for those objects, to the requester at a preset frequency; by analyzing that target sensing data, the requester obtains more accurate and comprehensive perception results for the target sensing objects.
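A hedged sketch of this request/response loop; the request format, the callback interfaces and the infinite sending loop are assumptions for exposition, not the patent's protocol.

```python
import time

def serve_request(request, get_results, frequency_hz, send):
    # request: e.g. {"object_ids": ["obj_1"]}; None means all objects.
    # get_results(): current {object_id: target sensing data} mapping.
    # send(payload): transport callback; the network layer is not
    # modeled here.
    while True:
        results = get_results()
        ids = request.get("object_ids") or list(results)
        send({oid: results[oid] for oid in ids})
        time.sleep(1.0 / frequency_hz)  # preset sending frequency
```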
It should be noted that although the above embodiments place both roadside sensing units and vehicle-mounted sensing units in the target area, the method is equally applicable to scenarios with only roadside sensing units or only vehicle-mounted sensing units; that is, the perception data acquisition method of the invention is widely applicable and can acquire high-quality perception data in various single-vehicle or vehicle-road cooperation scenarios.
In this method, the total sensing data of the sensing units is matched according to the sensing overlap information of the plurality of sensing units, so that for every sensing object in a sensing overlap area the sensing data perceived by multiple units of different types and at different installation positions can be obtained. Because the sensing units comprise at least one of vehicle-mounted and roadside sensing units, fused perception between different vehicles, between vehicles and roadside sensing units, and between different roadside sensing units can be realized, overcoming the perception blind areas to which single-vehicle sensing is prone and enlarging the sensing range. In addition, each sensing unit is provided with an independent cache unit that stores its total sensing data, so the total sensing data can be stored in real time, overcoming the delay problem in distributed-system data transmission; the interaction of the sensing units with their cache units thus improves the quality of the sensing data.
Correspondingly, fig. 6 is a schematic structural diagram of the sensing data acquisition device provided by the present invention, please refer to fig. 6, the sensing data acquisition device includes:
the first obtaining module 110 is configured to obtain relative position information and sensing range information of a plurality of sensing units in a target area, where a sensing unit includes at least one of a vehicle-mounted sensing unit and a roadside sensing unit;
a determining module 120, configured to determine, according to the relative position information and the sensing range information, sensing overlapping information of each sensing unit and other sensing units;
a second obtaining module 130, configured to obtain, from the cache unit corresponding to each sensing unit, total sensing data of all sensing objects in the sensing range by the sensing unit in the sensing period;
the matching module 140 is configured to match the total sensing data in each cache unit according to the sensing overlapping information to obtain sub-sensing data corresponding to each sensing object, and generate a sensing data set corresponding to each sensing object according to the sub-sensing data;
the obtaining module 150 is configured to process each piece of sensing data in the target sensing data set to obtain target sensing data of the target sensing object when the number of the piece of sensing data in the target sensing data set is greater than a preset value.
In one embodiment, the second obtaining module 130 includes:
the first acquisition submodule is used for acquiring a target sensing period of the target sensing unit;
and the second acquisition sub-module is used for acquiring total sensing data of all the sensing units in the target sensing period at a preset frequency.
In one embodiment, the matching module 140 includes:
the first determining submodule is used for determining a first type of sensing unit which is overlapped with the sensing areas of other sensing units and a second type of sensing unit which is not overlapped with the sensing areas of other sensing units from all sensing units according to the sensing overlapping information;
the first matching sub-module is used for matching the overall perception data of the first type perception units with any two overlapped perception areas to obtain first sub-perception data corresponding to each first type perception object;
and the third obtaining sub-module is used for directly obtaining second sub-sensing data corresponding to each second type sensing object from the overall sensing data of each second type sensing unit.
In one embodiment, the first matching sub-module is configured to obtain a type parameter of each first type sensing unit; determining matching similarity parameters of the first type sensing units with overlapped sensing areas according to the type parameters; and matching the overall sensing data of the first type sensing units with any two sensing areas overlapped on each other based on the matching similarity parameters.
In one embodiment, the obtaining module 150 includes:
the fourth acquisition submodule is used for acquiring distance information between the sensing units corresponding to the sub-sensing data in the target sensing data set and the target sensing object;
the second determining submodule is used for determining the sensing unit closest to the target sensing object as the target sensing unit according to the distance information;
and the third determining submodule is used for determining the sub-sensing data of the target sensing unit as the target sensing data of the target sensing object.
In one embodiment, the obtaining module 150 further includes:
the fusion submodule is used for carrying out fusion processing on each sub-sensing data in the target sensing data set;
and the generation submodule generates target perception data of the target perception object according to the fusion result.
In one embodiment, the obtaining module 150 further comprises:
the fifth acquisition submodule is used for acquiring the perception synchronization information of each perception unit corresponding to the target perception data set;
the fourth determining submodule is used for determining a target filtering method according to the perception synchronization information;
and the filtering submodule is used for filtering the sub-sensing data in the target sensing data set by a target filtering method.
In an embodiment, the sensing data obtaining apparatus further includes a first determining module operating after the matching module 140, and the first determining module is configured to directly determine the sub-sensing data in the target sensing data set as the target sensing data when the number of the sub-sensing data in the target sensing data set is not greater than a preset value.
In one embodiment, the sensory data acquisition device further comprises, operative after the obtaining module 150:
the receiving module is used for receiving a sensing data acquisition request;
and the sending module is used for sending the target sensing data of the target sensing object corresponding to the sensing data acquisition request at a preset frequency.
Unlike the prior art, the perception data acquisition device provided by the invention matches the total sensing data of the sensing units according to the sensing overlap information of the plurality of sensing units, so that for every sensing object in a sensing overlap area the sensing data perceived by multiple units of different types and at different installation positions can be obtained. Because the sensing units comprise at least one of vehicle-mounted and roadside sensing units, fused perception between different vehicles, between vehicles and roadside sensing units, and between different roadside sensing units can be realized, overcoming the perception blind areas to which single-vehicle sensing is prone and enlarging the sensing range. In addition, each sensing unit is provided with an independent cache unit that stores its total sensing data, so the total sensing data can be stored in real time, overcoming the delay problem in distributed-system data transmission; the interaction of the sensing units with their cache units thus improves the quality of the sensing data.
Accordingly, the present invention also provides an electronic device, as shown in fig. 7, which may include radio frequency circuitry 701, a memory 702 including one or more computer-readable storage media, an input unit 703, a display unit 704, a sensor 705, audio circuitry 706, a WiFi module 707, a processor 708 including one or more processing cores, and a power supply 709. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. Wherein:
the rf circuit 701 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then sends the received downlink information to the one or more processors 708 for processing; in addition, data relating to uplink is transmitted to the base station. The memory 702 may be used to store software programs and modules, and the processor 708 executes various functional applications and data processing by operating the software programs and modules stored in the memory 702. The input unit 703 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The display unit 704 may be used to display information input by or provided to a user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof.
The electronic device may also include at least one sensor 705, such as a light sensor, motion sensor, and other sensors. The audio circuitry 706 includes speakers that can provide an audio interface between the user and the electronic device.
WiFi belongs to short-range wireless transmission technology, and the electronic device can help the user send and receive e-mail, browse web pages, access streaming media, etc. through the WiFi module 707, which provides wireless broadband internet access for the user. Although fig. 7 shows the WiFi module 707, it is understood that it does not belong to the essential constitution of the electronic device, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 708 is a control center of the electronic device, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 702 and calling data stored in the memory 702, thereby performing overall monitoring of the mobile phone.
The electronic device also includes a power source 709 (e.g., a battery) for supplying power to various components, which may preferably be logically connected to the processor 708 via a power management system, such that functions of managing charging, discharging, and power consumption are performed via the power management system.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 708 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 702 according to the following instructions, and the processor 708 runs the application programs stored in the memory 702, so as to implement the following functions:
acquiring relative position information and sensing range information of a plurality of sensing units in a target area, wherein the sensing units comprise at least one of vehicle-mounted sensing units and roadside sensing units; determining perception overlapping information of each perception unit and other perception units according to the relative position information and the perception range information; acquiring total sensing data of all sensing objects in a sensing range by a sensing unit in a sensing period from a cache unit corresponding to each sensing unit; according to the perception overlapping information, matching the total perception data in each cache unit to obtain sub perception data corresponding to each perception object, and respectively generating a perception data set corresponding to each perception object according to the sub perception data; and when the number of the sub-perception data in the target perception data set is larger than a preset value, processing the sub-perception data in the target perception data set to obtain the target perception data of the target perception object.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present invention provides a computer-readable storage medium storing a plurality of instructions that can be loaded by a processor to perform the following functions:
acquiring relative position information and sensing range information of a plurality of sensing units in a target area, wherein the sensing units comprise at least one of vehicle-mounted sensing units and roadside sensing units; determining perception overlapping information of each perception unit and other perception units according to the relative position information and the perception range information; acquiring total sensing data of all sensing objects in a sensing range by a sensing unit in a sensing period from a cache unit corresponding to each sensing unit; according to the perception overlapping information, matching the total perception data in each cache unit to obtain sub perception data corresponding to each perception object, and respectively generating a perception data set corresponding to each perception object according to the sub perception data; and when the number of the sub-perception data in the target perception data set is larger than a preset value, processing the sub-perception data in the target perception data set to obtain the target perception data of the target perception object.
The specific implementation of each of the above operations may be found in the foregoing embodiments and is not repeated here.
The computer-readable storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps of any method provided by the present invention, they can achieve the beneficial effects achievable by any such method; for details, see the foregoing embodiments, which are not repeated here.
The perception data acquisition method, apparatus, electronic device, and storage medium provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the invention, and the descriptions of the embodiments are intended only to help readers understand the technical solutions and core ideas of the invention. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
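Before the claims, one step of the description deserves a concrete illustration: the split into first-class units (perception area overlaps another unit's) and second-class units (no overlap), and the type-dependent similarity gate used when matching two overlapping units. The sketch below reuses the illustrative SensingUnit and Detection shapes from the earlier fragment; the gate values in matching_similarity are invented for illustration, not taken from the patent.

import math

def partition_units(units, overlap_info):
    """Split units into first-class (perception area overlaps some other
    unit's) and second-class (no overlap), as the matching step describes."""
    first = [u for u in units if overlap_info[u.unit_id]]
    second = [u for u in units if not overlap_info[u.unit_id]]
    return first, second

def matching_similarity(kind_a: str, kind_b: str) -> float:
    # Assumed gate: a heterogeneous pair (vehicle + roadside) tolerates a
    # larger position error than a homogeneous pair.
    return 2.0 if kind_a == kind_b else 3.5

def match_overlapping_pair(unit_a, unit_b):
    """Greedily pair detections from two overlapping first-class units
    whose positions fall within the similarity gate; each pair yields the
    first sub-perception data of one perception object."""
    gate = matching_similarity(unit_a.kind, unit_b.kind)
    pairs, used = [], set()
    for da in unit_a.cache:
        best, best_d = None, gate
        for db in unit_b.cache:
            if id(db) not in used:
                d = math.dist(da.position, db.position)
                if d < best_d:
                    best, best_d = db, d
        if best is not None:
            used.add(id(best))
            pairs.append((da, best))
    return pairs

Second-class units need no matching: their cached detections are taken directly as second sub-perception data.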

Claims (8)

1. A perception data acquisition method, characterized by comprising the following steps:
acquiring relative position information and perception range information of a plurality of perception units in a target area, wherein the perception units comprise at least one of vehicle-mounted perception units and roadside perception units;
determining perception overlap information between each perception unit and the other perception units according to the relative position information and the perception range information;
acquiring, from the cache unit corresponding to each perception unit, the total perception data collected by that perception unit for all perception objects within its perception range during a perception period;
matching the total perception data in each cache unit according to the perception overlap information to obtain the sub-perception data corresponding to each perception object, and generating a perception data set for each perception object from its sub-perception data, which comprises: determining, from all the perception units and according to the perception overlap information, first-class perception units whose perception areas overlap those of other perception units and second-class perception units whose perception areas do not overlap those of any other perception unit; acquiring the type parameters of each first-class perception unit, determining a matching similarity parameter for any two first-class perception units whose perception areas overlap according to the type parameters, and matching the total perception data of those two units based on the matching similarity parameter to obtain the first sub-perception data corresponding to each first-class perception object; and directly acquiring the second sub-perception data corresponding to each second-class perception object from the total perception data of each second-class perception unit; and
when the number of items of sub-perception data in a target perception data set is larger than a preset value, processing the sub-perception data in the target perception data set to obtain the target perception data of the target perception object.
2. The perception data acquisition method according to claim 1, wherein the step of acquiring, from the cache unit corresponding to each perception unit, the total perception data of all perception objects within the perception range during a perception period comprises:
acquiring a target perception period of a target perception unit; and
acquiring the total perception data of all perception units within the target perception period at a preset frequency.
3. The perception data acquisition method according to claim 1, wherein the step of processing the sub-perception data in the target perception data set to obtain the target perception data of the target perception object comprises:
acquiring distance information between the perception unit corresponding to each item of sub-perception data in the target perception data set and the target perception object;
determining, according to the distance information, the perception unit closest to the target perception object as the target perception unit; and
determining the sub-perception data of the target perception unit as the target perception data of the target perception object.
4. The perception data acquisition method according to claim 1, wherein the step of processing the sub-perception data in the target perception data set to obtain the target perception data of the target perception object comprises:
fusing the items of sub-perception data in the target perception data set; and
generating the target perception data of the target perception object according to the fusion result.
5. The perception data acquisition method according to claim 4, wherein, before the step of fusing the items of sub-perception data in the target perception data set, the method further comprises:
acquiring perception synchronization information of each perception unit corresponding to the target perception data set;
determining a target filtering method according to the perception synchronization information; and
filtering the sub-perception data in the target perception data set with the target filtering method.
6. The perception data acquisition method according to claim 1, wherein, after the step of generating a perception data set for each perception object from its sub-perception data, the method further comprises:
when the number of items of sub-perception data in the target perception data set is not larger than the preset value, directly determining the sub-perception data in the target perception data set as the target perception data.
7. The perception data acquisition method according to claim 1, wherein, after the step of processing the sub-perception data in the target perception data set to obtain the target perception data of the target perception object, the method further comprises:
receiving a perception data acquisition request; and
sending, at a preset frequency, the target perception data of the target perception object corresponding to the perception data acquisition request.
8. A perception data acquisition apparatus, comprising:
a first acquisition module, configured to acquire relative position information and perception range information of a plurality of perception units in a target area, wherein the perception units comprise at least one of vehicle-mounted perception units and roadside perception units;
a determining module, configured to determine perception overlap information between each perception unit and the other perception units according to the relative position information and the perception range information;
a second acquisition module, configured to acquire, from the cache unit corresponding to each perception unit, the total perception data collected by that perception unit for all perception objects within its perception range during a perception period;
a matching module, configured to match the total perception data in each cache unit according to the perception overlap information to obtain the sub-perception data corresponding to each perception object, and to generate a perception data set for each perception object from its sub-perception data, which comprises: determining, from all the perception units and according to the perception overlap information, first-class perception units whose perception areas overlap those of other perception units and second-class perception units whose perception areas do not overlap those of any other perception unit; acquiring the type parameters of each first-class perception unit, determining a matching similarity parameter for any two first-class perception units whose perception areas overlap according to the type parameters, and matching the total perception data of those two units based on the matching similarity parameter to obtain the first sub-perception data corresponding to each first-class perception object; and directly acquiring the second sub-perception data corresponding to each second-class perception object from the total perception data of each second-class perception unit; and
an obtaining module, configured to process, when the number of items of sub-perception data in a target perception data set is larger than a preset value, the sub-perception data in the target perception data set to obtain the target perception data of the target perception object.
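As a reading aid for claims 3 to 6, the two processing strategies and the small-set shortcut can be sketched as follows, again over the illustrative types introduced earlier. The positional average in fused_data merely stands in for whatever fusion and claim-5 filtering a real deployment would apply; none of these function names come from the patent.

import math

def nearest_unit_data(object_set, units_by_id, object_position):
    """Claim-3 style: return the sub-perception data of the perception
    unit closest to the target perception object."""
    _, detection = min(
        object_set,
        key=lambda entry: math.dist(units_by_id[entry[0]].position,
                                    object_position))
    return detection

def fused_data(object_set):
    """Claim-4 style: fuse all sub-perception data; a plain positional
    average is used here purely as a placeholder."""
    xs = [det.position[0] for _, det in object_set]
    ys = [det.position[1] for _, det in object_set]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def target_perception_data(object_set, units_by_id, object_position,
                           preset_value=1, fuse=False):
    # Claim-6 shortcut: a set at or below the preset value is used directly.
    if len(object_set) <= preset_value:
        return object_set[0][1]
    return fused_data(object_set) if fuse else nearest_unit_data(
        object_set, units_by_id, object_position)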
CN202110273240.9A 2021-03-15 2021-03-15 Perception data acquisition method and device Active CN112687107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110273240.9A CN112687107B (en) 2021-03-15 2021-03-15 Perception data acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110273240.9A CN112687107B (en) 2021-03-15 2021-03-15 Perception data acquisition method and device

Publications (2)

Publication Number Publication Date
CN112687107A (en) 2021-04-20
CN112687107B (en) 2021-05-25

Family

ID=75455626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110273240.9A Active CN112687107B (en) 2021-03-15 2021-03-15 Perception data acquisition method and device

Country Status (1)

Country Link
CN (1) CN112687107B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116419144A (en) * 2021-12-29 2023-07-11 维沃移动通信有限公司 Method and device for determining perceived signal period, communication equipment and storage medium
CN115148023B (en) * 2022-06-23 2024-06-14 阿里云计算有限公司 Path fusion method and device and electronic equipment
CN115273473A (en) * 2022-07-29 2022-11-01 阿波罗智联(北京)科技有限公司 Method and device for processing perception information of road side equipment and automatic driving vehicle

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018110964A1 (en) * 2016-12-14 2018-06-21 Samsung Electronics Co., Ltd. Electronic device and method for recognizing object by using plurality of sensors
CN108646739A (en) * 2018-05-14 2018-10-12 A sensor information fusion method
CN108922188A (en) * 2018-07-24 2018-11-30 Four-dimensional real-scene traffic perception, early-warning and monitoring management system based on radar tracking and positioning
CN109714730A (en) * 2019-02-01 2019-05-03 Cloud control platform system and cooperative system and method for vehicle-vehicle and vehicle-road coordination
CN110412595A (en) * 2019-06-04 2019-11-05 Roadbed perception method, system, vehicle, equipment and storage medium

Also Published As

Publication number Publication date
CN112687107A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN112687107B (en) Perception data acquisition method and device
CN109817022B (en) Method, terminal, automobile and system for acquiring position of target object
CN110164135B (en) Positioning method, positioning device and positioning system
CN113420805B (en) Dynamic track image fusion method, device, equipment and medium for video and radar
CN110880236B (en) Road condition information processing method, device and system
US20190120964A1 (en) Collaborative data processing
KR20160112816A (en) 2016-09-28 Accident information management apparatus, vehicle having the same and method for managing accident information
WO2018205844A1 (en) Video surveillance device, surveillance server, and system
CN113607185B (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN110972085B (en) Information interaction method, device, storage medium, equipment and system
US11722847B2 (en) Method and apparatus for augmented reality service in wireless communication system
CN110880235A (en) Road side equipment in road condition information processing system, processing method and device
US10803332B2 (en) Traffic sign detection method, apparatus, system and medium
CN104333564A (en) Target operation method, system and device
CN113115216B (en) Indoor positioning method, service management server and computer storage medium
CN111314651A (en) Road condition display method and system based on V2X technology, V2X terminal and V2X server
CN114419572B (en) Multi-radar target detection method and device, electronic equipment and storage medium
Liu et al. Towards vehicle-to-everything autonomous driving: A survey on collaborative perception
CN113269168B (en) Obstacle data processing method and device, electronic equipment and computer readable medium
CN112232451B (en) Multi-sensor data fusion method and device, electronic equipment and medium
CN112595728A (en) Road problem determination method and related device
CN113312403B (en) Map acquisition method and device, electronic equipment and storage medium
Ahmed et al. A Joint Perception Scheme For Connected Vehicles
CN111047890B (en) Vehicle driving decision method and device, medium and equipment for intelligent driving
CN116338604A (en) Data processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211208

Address after: 215000 room 808, 8 / F, building 9a, launch area of Yangtze River Delta International R & D community, No. 286, qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Patentee after: Tianyi Transportation Technology Co.,Ltd.

Address before: 2nd floor, building A3, Hongfeng science and Technology Park, Nanjing Economic and Technological Development Zone, Nanjing, Jiangsu Province 210033

Patentee before: CIIC Technology Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210420

Assignee: Zhongzhixing (Shanghai) Transportation Technology Co.,Ltd.

Assignor: Tianyi Transportation Technology Co.,Ltd.

Contract record no.: X2022980005387

Denomination of invention: Perceptual data acquisition method and device

Granted publication date: 20210525

License type: Common License

Record date: 20220518

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210420

Assignee: CIIC Technology Co.,Ltd.

Assignor: Tianyi Transportation Technology Co.,Ltd.

Contract record no.: X2022980005922

Denomination of invention: Perceptual data acquisition method and device

Granted publication date: 20210525

License type: Common License

Record date: 20220524
